Test Report: Docker_Linux_crio 22000

3f3a61283993ee602bd323c44b704727ac3a4ece:2025-11-29:42558

Test failures (38/328)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.25
35 TestAddons/parallel/Registry 13.5
36 TestAddons/parallel/RegistryCreds 0.44
37 TestAddons/parallel/Ingress 147.09
38 TestAddons/parallel/InspektorGadget 5.25
39 TestAddons/parallel/MetricsServer 5.31
41 TestAddons/parallel/CSI 42.34
42 TestAddons/parallel/Headlamp 2.74
43 TestAddons/parallel/CloudSpanner 5.26
44 TestAddons/parallel/LocalPath 8.12
45 TestAddons/parallel/NvidiaDevicePlugin 5.26
46 TestAddons/parallel/Yakd 5.26
47 TestAddons/parallel/AmdGpuDevicePlugin 5.26
97 TestFunctional/parallel/ServiceCmdConnect 602.9
116 TestFunctional/parallel/ImageCommands/ImageListShort 2.29
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.07
130 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.24
132 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.05
133 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.31
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.2
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.35
137 TestFunctional/parallel/ServiceCmd/DeployApp 600.6
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.53
153 TestFunctional/parallel/ServiceCmd/Format 0.53
154 TestFunctional/parallel/ServiceCmd/URL 0.55
191 TestJSONOutput/pause/Command 2.33
197 TestJSONOutput/unpause/Command 1.58
292 TestPause/serial/Pause 5.8
348 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.39
351 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.42
359 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.68
360 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.66
371 TestStartStop/group/old-k8s-version/serial/Pause 5.82
374 TestStartStop/group/no-preload/serial/Pause 7.7
381 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.19
384 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.3
386 TestStartStop/group/embed-certs/serial/Pause 5.83
393 TestStartStop/group/newest-cni/serial/Pause 6.05
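
Note: the TestAddons failures shown below share a single signature: the test's deferred "addons disable" exits 11 with MK_ADDON_DISABLE_PAUSED because minikube's paused-state check cannot list runc containers on this crio node. A minimal reproduction against this run's profile (hypothetical; assumes the addons-053273 cluster is still up):

	out/minikube-linux-amd64 -p addons-053273 addons disable registry --alsologtostderr -v=1
	# exits 11; its stderr ends with: open /run/runc: no such file or directory
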
TestAddons/serial/Volcano (0.25s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-053273 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-053273 addons disable volcano --alsologtostderr -v=1: exit status 11 (253.880077ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1129 08:30:25.689952   18437 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:30:25.690292   18437 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:30:25.690306   18437 out.go:374] Setting ErrFile to fd 2...
	I1129 08:30:25.690313   18437 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:30:25.690568   18437 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 08:30:25.690930   18437 mustload.go:66] Loading cluster: addons-053273
	I1129 08:30:25.691253   18437 config.go:182] Loaded profile config "addons-053273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:30:25.691272   18437 addons.go:622] checking whether the cluster is paused
	I1129 08:30:25.691354   18437 config.go:182] Loaded profile config "addons-053273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:30:25.691370   18437 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:30:25.691736   18437 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:30:25.710121   18437 ssh_runner.go:195] Run: systemctl --version
	I1129 08:30:25.710165   18437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:30:25.729587   18437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:30:25.829474   18437 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 08:30:25.829558   18437 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 08:30:25.858055   18437 cri.go:89] found id: "36eaff53da4c2f6b42add6d16769dbf72367cea99a40071022ca508b78f204e3"
	I1129 08:30:25.858075   18437 cri.go:89] found id: "ad9d7cb785c36f52b00257c3dc218a265f0c419810e304289cb4bd6ae54ea0fb"
	I1129 08:30:25.858079   18437 cri.go:89] found id: "d81ef79dbc162f4439586944a6f6e7ec9ee49b46c9f99570a494e65d9e844e1c"
	I1129 08:30:25.858082   18437 cri.go:89] found id: "847e89dd7511ca217724747e5a7d3d3e92a9a1203557c6d825fac2ac93fc42de"
	I1129 08:30:25.858085   18437 cri.go:89] found id: "f6278bb7605d8f529b23928a2ff314c13962d33818b19c2e815efc915bdd97b4"
	I1129 08:30:25.858088   18437 cri.go:89] found id: "0d8ff87c1bbdabd06fd59e49a398d261ba1eac9b8e8918bdd207dee2b82f25fe"
	I1129 08:30:25.858091   18437 cri.go:89] found id: "4f7efb63753a43998a433bef512bb3d0227ff650217a030621d7f3865343ac15"
	I1129 08:30:25.858094   18437 cri.go:89] found id: "5858e5f4e311e0929b2f180d850e946a59fcb892b40f826155bc49556663a6c3"
	I1129 08:30:25.858097   18437 cri.go:89] found id: "73fddce698a89389e28548b830f2e792a469943f50f2062c0e8ab0f768c2fdf3"
	I1129 08:30:25.858102   18437 cri.go:89] found id: "e00a3c9e09fd212280a726f4b34a8ae118921841c54dd3f203d4bfbdfe524136"
	I1129 08:30:25.858105   18437 cri.go:89] found id: "25051b979640fb9f0ca6b2fefbbf729dbf13d7807bc2a5bbe509c1427a1450e4"
	I1129 08:30:25.858109   18437 cri.go:89] found id: "89340d645aa11e58c4854339529354f343983cd634d11dc662ffd693c5c439ce"
	I1129 08:30:25.858112   18437 cri.go:89] found id: "5974b84e4fa7722f053401f2d8bab3dd46415a2c49bbe27ca534168936995dda"
	I1129 08:30:25.858115   18437 cri.go:89] found id: "cccc47c19980c886590d29fb88ee7a2c36254653e3f67fc92d57130b657fb514"
	I1129 08:30:25.858118   18437 cri.go:89] found id: "9dd9f49e78582101e727a4a329080a23511be2b196b7bd5a49d11bb18bb9456e"
	I1129 08:30:25.858126   18437 cri.go:89] found id: "ab73d32295897c398656e0ef9e0bde92cfc96cc5b9ea3e9e791a5408cd0a58fe"
	I1129 08:30:25.858129   18437 cri.go:89] found id: "5415e4a1867a4762dcd4722642096717b802f122cd9dcf2f049fe3c310535a6b"
	I1129 08:30:25.858134   18437 cri.go:89] found id: "f2891b2f589bc2452e45d6acdd71b5d05e0031591db3c871a122da94ae85e989"
	I1129 08:30:25.858137   18437 cri.go:89] found id: "536ae01d9c834fda23c10b1cd958984657b851b56d40240995ae742e32fa0686"
	I1129 08:30:25.858139   18437 cri.go:89] found id: "e7636733a471a3a3f946995b88ecad291811005e4acb271d6a45893ae7c423d8"
	I1129 08:30:25.858142   18437 cri.go:89] found id: "e64fa5518306f9c5652b32ea5acb6e82c7ee8c3579473ec40514f095b17e4677"
	I1129 08:30:25.858145   18437 cri.go:89] found id: "42985a54cfd5e3183cf34b9ec4e83a5bb0dacfbaf9b293c28fdbc26c9e683b7b"
	I1129 08:30:25.858147   18437 cri.go:89] found id: "97e8e47987d6cc872f7c55ace3420ac14fe7648d2b6b68805f7ec77909d9de31"
	I1129 08:30:25.858150   18437 cri.go:89] found id: "fa034ef6fed4ecde89cfb42c0d21d8c3f6bf1bb0f3863938a47c874ee0d12dc6"
	I1129 08:30:25.858153   18437 cri.go:89] found id: ""
	I1129 08:30:25.858189   18437 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 08:30:25.871955   18437 out.go:203] 
	W1129 08:30:25.873253   18437 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T08:30:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T08:30:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 08:30:25.873268   18437 out.go:285] * 
	* 
	W1129 08:30:25.876217   18437 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 08:30:25.877527   18437 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-053273 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.25s)
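
The disable path in the log above runs two commands inside the node. Replaying them over ssh (a hypothetical triage step; both commands are copied from the cri.go/ssh_runner lines above) shows the crictl listing succeeds while the runc listing is the one that fails:

	out/minikube-linux-amd64 -p addons-053273 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	out/minikube-linux-amd64 -p addons-053273 ssh -- sudo runc list -f json
	# the second command exits 1: open /run/runc: no such file or directory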

TestAddons/parallel/Registry (13.5s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 2.834926ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-gt598" [43f7eae2-b891-44d2-80dc-8650bee1c9d8] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002596958s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-zsxkb" [417acc75-98bb-45c2-a648-a0e941620c8e] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003974133s
addons_test.go:392: (dbg) Run:  kubectl --context addons-053273 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-053273 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-053273 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.017943198s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-053273 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-053273 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-053273 addons disable registry --alsologtostderr -v=1: exit status 11 (261.33614ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1129 08:30:49.016417   21292 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:30:49.016680   21292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:30:49.016688   21292 out.go:374] Setting ErrFile to fd 2...
	I1129 08:30:49.016693   21292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:30:49.016913   21292 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 08:30:49.017159   21292 mustload.go:66] Loading cluster: addons-053273
	I1129 08:30:49.017468   21292 config.go:182] Loaded profile config "addons-053273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:30:49.017490   21292 addons.go:622] checking whether the cluster is paused
	I1129 08:30:49.017592   21292 config.go:182] Loaded profile config "addons-053273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:30:49.017611   21292 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:30:49.018030   21292 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:30:49.036295   21292 ssh_runner.go:195] Run: systemctl --version
	I1129 08:30:49.036369   21292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:30:49.054208   21292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:30:49.157067   21292 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 08:30:49.157157   21292 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 08:30:49.190606   21292 cri.go:89] found id: "36eaff53da4c2f6b42add6d16769dbf72367cea99a40071022ca508b78f204e3"
	I1129 08:30:49.190625   21292 cri.go:89] found id: "ad9d7cb785c36f52b00257c3dc218a265f0c419810e304289cb4bd6ae54ea0fb"
	I1129 08:30:49.190629   21292 cri.go:89] found id: "d81ef79dbc162f4439586944a6f6e7ec9ee49b46c9f99570a494e65d9e844e1c"
	I1129 08:30:49.190633   21292 cri.go:89] found id: "847e89dd7511ca217724747e5a7d3d3e92a9a1203557c6d825fac2ac93fc42de"
	I1129 08:30:49.190636   21292 cri.go:89] found id: "f6278bb7605d8f529b23928a2ff314c13962d33818b19c2e815efc915bdd97b4"
	I1129 08:30:49.190640   21292 cri.go:89] found id: "0d8ff87c1bbdabd06fd59e49a398d261ba1eac9b8e8918bdd207dee2b82f25fe"
	I1129 08:30:49.190642   21292 cri.go:89] found id: "4f7efb63753a43998a433bef512bb3d0227ff650217a030621d7f3865343ac15"
	I1129 08:30:49.190645   21292 cri.go:89] found id: "5858e5f4e311e0929b2f180d850e946a59fcb892b40f826155bc49556663a6c3"
	I1129 08:30:49.190648   21292 cri.go:89] found id: "73fddce698a89389e28548b830f2e792a469943f50f2062c0e8ab0f768c2fdf3"
	I1129 08:30:49.190653   21292 cri.go:89] found id: "e00a3c9e09fd212280a726f4b34a8ae118921841c54dd3f203d4bfbdfe524136"
	I1129 08:30:49.190656   21292 cri.go:89] found id: "25051b979640fb9f0ca6b2fefbbf729dbf13d7807bc2a5bbe509c1427a1450e4"
	I1129 08:30:49.190658   21292 cri.go:89] found id: "89340d645aa11e58c4854339529354f343983cd634d11dc662ffd693c5c439ce"
	I1129 08:30:49.190661   21292 cri.go:89] found id: "5974b84e4fa7722f053401f2d8bab3dd46415a2c49bbe27ca534168936995dda"
	I1129 08:30:49.190664   21292 cri.go:89] found id: "cccc47c19980c886590d29fb88ee7a2c36254653e3f67fc92d57130b657fb514"
	I1129 08:30:49.190667   21292 cri.go:89] found id: "9dd9f49e78582101e727a4a329080a23511be2b196b7bd5a49d11bb18bb9456e"
	I1129 08:30:49.190675   21292 cri.go:89] found id: "ab73d32295897c398656e0ef9e0bde92cfc96cc5b9ea3e9e791a5408cd0a58fe"
	I1129 08:30:49.190678   21292 cri.go:89] found id: "5415e4a1867a4762dcd4722642096717b802f122cd9dcf2f049fe3c310535a6b"
	I1129 08:30:49.190682   21292 cri.go:89] found id: "f2891b2f589bc2452e45d6acdd71b5d05e0031591db3c871a122da94ae85e989"
	I1129 08:30:49.190685   21292 cri.go:89] found id: "536ae01d9c834fda23c10b1cd958984657b851b56d40240995ae742e32fa0686"
	I1129 08:30:49.190688   21292 cri.go:89] found id: "e7636733a471a3a3f946995b88ecad291811005e4acb271d6a45893ae7c423d8"
	I1129 08:30:49.190691   21292 cri.go:89] found id: "e64fa5518306f9c5652b32ea5acb6e82c7ee8c3579473ec40514f095b17e4677"
	I1129 08:30:49.190694   21292 cri.go:89] found id: "42985a54cfd5e3183cf34b9ec4e83a5bb0dacfbaf9b293c28fdbc26c9e683b7b"
	I1129 08:30:49.190697   21292 cri.go:89] found id: "97e8e47987d6cc872f7c55ace3420ac14fe7648d2b6b68805f7ec77909d9de31"
	I1129 08:30:49.190700   21292 cri.go:89] found id: "fa034ef6fed4ecde89cfb42c0d21d8c3f6bf1bb0f3863938a47c874ee0d12dc6"
	I1129 08:30:49.190702   21292 cri.go:89] found id: ""
	I1129 08:30:49.190739   21292 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 08:30:49.208355   21292 out.go:203] 
	W1129 08:30:49.210396   21292 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T08:30:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T08:30:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 08:30:49.210416   21292 out.go:285] * 
	* 
	W1129 08:30:49.214750   21292 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 08:30:49.216508   21292 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-053273 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (13.50s)

TestAddons/parallel/RegistryCreds (0.44s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 2.919872ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-053273
addons_test.go:332: (dbg) Run:  kubectl --context addons-053273 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-053273 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-053273 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (257.657205ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1129 08:30:38.702244   20104 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:30:38.702630   20104 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:30:38.702645   20104 out.go:374] Setting ErrFile to fd 2...
	I1129 08:30:38.702651   20104 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:30:38.702944   20104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 08:30:38.703268   20104 mustload.go:66] Loading cluster: addons-053273
	I1129 08:30:38.703710   20104 config.go:182] Loaded profile config "addons-053273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:30:38.703735   20104 addons.go:622] checking whether the cluster is paused
	I1129 08:30:38.703879   20104 config.go:182] Loaded profile config "addons-053273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:30:38.703904   20104 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:30:38.704423   20104 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:30:38.723234   20104 ssh_runner.go:195] Run: systemctl --version
	I1129 08:30:38.723297   20104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:30:38.742437   20104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:30:38.843469   20104 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 08:30:38.843564   20104 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 08:30:38.873408   20104 cri.go:89] found id: "36eaff53da4c2f6b42add6d16769dbf72367cea99a40071022ca508b78f204e3"
	I1129 08:30:38.873449   20104 cri.go:89] found id: "ad9d7cb785c36f52b00257c3dc218a265f0c419810e304289cb4bd6ae54ea0fb"
	I1129 08:30:38.873457   20104 cri.go:89] found id: "d81ef79dbc162f4439586944a6f6e7ec9ee49b46c9f99570a494e65d9e844e1c"
	I1129 08:30:38.873466   20104 cri.go:89] found id: "847e89dd7511ca217724747e5a7d3d3e92a9a1203557c6d825fac2ac93fc42de"
	I1129 08:30:38.873471   20104 cri.go:89] found id: "f6278bb7605d8f529b23928a2ff314c13962d33818b19c2e815efc915bdd97b4"
	I1129 08:30:38.873475   20104 cri.go:89] found id: "0d8ff87c1bbdabd06fd59e49a398d261ba1eac9b8e8918bdd207dee2b82f25fe"
	I1129 08:30:38.873481   20104 cri.go:89] found id: "4f7efb63753a43998a433bef512bb3d0227ff650217a030621d7f3865343ac15"
	I1129 08:30:38.873486   20104 cri.go:89] found id: "5858e5f4e311e0929b2f180d850e946a59fcb892b40f826155bc49556663a6c3"
	I1129 08:30:38.873490   20104 cri.go:89] found id: "73fddce698a89389e28548b830f2e792a469943f50f2062c0e8ab0f768c2fdf3"
	I1129 08:30:38.873516   20104 cri.go:89] found id: "e00a3c9e09fd212280a726f4b34a8ae118921841c54dd3f203d4bfbdfe524136"
	I1129 08:30:38.873525   20104 cri.go:89] found id: "25051b979640fb9f0ca6b2fefbbf729dbf13d7807bc2a5bbe509c1427a1450e4"
	I1129 08:30:38.873530   20104 cri.go:89] found id: "89340d645aa11e58c4854339529354f343983cd634d11dc662ffd693c5c439ce"
	I1129 08:30:38.873537   20104 cri.go:89] found id: "5974b84e4fa7722f053401f2d8bab3dd46415a2c49bbe27ca534168936995dda"
	I1129 08:30:38.873542   20104 cri.go:89] found id: "cccc47c19980c886590d29fb88ee7a2c36254653e3f67fc92d57130b657fb514"
	I1129 08:30:38.873549   20104 cri.go:89] found id: "9dd9f49e78582101e727a4a329080a23511be2b196b7bd5a49d11bb18bb9456e"
	I1129 08:30:38.873564   20104 cri.go:89] found id: "ab73d32295897c398656e0ef9e0bde92cfc96cc5b9ea3e9e791a5408cd0a58fe"
	I1129 08:30:38.873586   20104 cri.go:89] found id: "5415e4a1867a4762dcd4722642096717b802f122cd9dcf2f049fe3c310535a6b"
	I1129 08:30:38.873593   20104 cri.go:89] found id: "f2891b2f589bc2452e45d6acdd71b5d05e0031591db3c871a122da94ae85e989"
	I1129 08:30:38.873597   20104 cri.go:89] found id: "536ae01d9c834fda23c10b1cd958984657b851b56d40240995ae742e32fa0686"
	I1129 08:30:38.873602   20104 cri.go:89] found id: "e7636733a471a3a3f946995b88ecad291811005e4acb271d6a45893ae7c423d8"
	I1129 08:30:38.873607   20104 cri.go:89] found id: "e64fa5518306f9c5652b32ea5acb6e82c7ee8c3579473ec40514f095b17e4677"
	I1129 08:30:38.873611   20104 cri.go:89] found id: "42985a54cfd5e3183cf34b9ec4e83a5bb0dacfbaf9b293c28fdbc26c9e683b7b"
	I1129 08:30:38.873615   20104 cri.go:89] found id: "97e8e47987d6cc872f7c55ace3420ac14fe7648d2b6b68805f7ec77909d9de31"
	I1129 08:30:38.873619   20104 cri.go:89] found id: "fa034ef6fed4ecde89cfb42c0d21d8c3f6bf1bb0f3863938a47c874ee0d12dc6"
	I1129 08:30:38.873624   20104 cri.go:89] found id: ""
	I1129 08:30:38.873695   20104 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 08:30:38.888278   20104 out.go:203] 
	W1129 08:30:38.889490   20104 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T08:30:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T08:30:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 08:30:38.889505   20104 out.go:285] * 
	* 
	W1129 08:30:38.892388   20104 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 08:30:38.893699   20104 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-053273 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.44s)

TestAddons/parallel/Ingress (147.09s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-053273 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-053273 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-053273 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [659f30f3-f651-4f47-8941-c7e89b0ae22d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [659f30f3-f651-4f47-8941-c7e89b0ae22d] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003522226s
I1129 08:30:44.196758    9216 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-053273 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-053273 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m15.40289772s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-053273 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-053273 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
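
Status 28 from the ssh'd command above is the remote curl's exit code, and 28 is curl's operation-timed-out code, so the session reached the node but nginx never answered through the ingress. A hypothetical manual re-check of the same request, with an explicit bound added here via --max-time:

	out/minikube-linux-amd64 -p addons-053273 ssh -- curl -s --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/
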
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-053273
helpers_test.go:243: (dbg) docker inspect addons-053273:

-- stdout --
	[
	    {
	        "Id": "4ce9f94b88a0aeb78da65dd6c720d82dc410e5569bcb7062eeccb22a94b4aae5",
	        "Created": "2025-11-29T08:28:44.78754074Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 11213,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T08:28:44.820243244Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/4ce9f94b88a0aeb78da65dd6c720d82dc410e5569bcb7062eeccb22a94b4aae5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4ce9f94b88a0aeb78da65dd6c720d82dc410e5569bcb7062eeccb22a94b4aae5/hostname",
	        "HostsPath": "/var/lib/docker/containers/4ce9f94b88a0aeb78da65dd6c720d82dc410e5569bcb7062eeccb22a94b4aae5/hosts",
	        "LogPath": "/var/lib/docker/containers/4ce9f94b88a0aeb78da65dd6c720d82dc410e5569bcb7062eeccb22a94b4aae5/4ce9f94b88a0aeb78da65dd6c720d82dc410e5569bcb7062eeccb22a94b4aae5-json.log",
	        "Name": "/addons-053273",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-053273:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-053273",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4ce9f94b88a0aeb78da65dd6c720d82dc410e5569bcb7062eeccb22a94b4aae5",
	                "LowerDir": "/var/lib/docker/overlay2/a68c3799c04cab13dc2b78294ec1c9cd7d65d892fed21ffba750fe0af0f4bdd8-init/diff:/var/lib/docker/overlay2/5b012372cfb54f6c71f4d7f0bca0124866eeda530eaa04bd84a67c7b4c8b35a8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a68c3799c04cab13dc2b78294ec1c9cd7d65d892fed21ffba750fe0af0f4bdd8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a68c3799c04cab13dc2b78294ec1c9cd7d65d892fed21ffba750fe0af0f4bdd8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a68c3799c04cab13dc2b78294ec1c9cd7d65d892fed21ffba750fe0af0f4bdd8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-053273",
	                "Source": "/var/lib/docker/volumes/addons-053273/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-053273",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-053273",
	                "name.minikube.sigs.k8s.io": "addons-053273",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "495f5523f5e4c5107e4584b29c8e0886ddf4ef4026b78f557eee47317a6b4154",
	            "SandboxKey": "/var/run/docker/netns/495f5523f5e4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-053273": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "895729fa4ef5de98467555d848cd10702b6938d0e0fc7bd88070035594bde18f",
	                    "EndpointID": "fd9f81e8719230000fcfe8444b5fc505a482fdeff1031c273f079b10ab30b766",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "ba:b7:63:f1:0c:d2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-053273",
	                        "4ce9f94b88a0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
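
The fields that matter for the ssh and ingress checks can be read straight out of the inspect output above; a small sketch, assuming jq is installed on the host (minikube itself uses the Go template visible in the stderr logs):

	docker inspect addons-053273 | jq -r '.[0].NetworkSettings.Ports["22/tcp"][0].HostPort'          # 32768 in this run
	docker inspect addons-053273 | jq -r '.[0].NetworkSettings.Networks["addons-053273"].IPAddress'  # 192.168.49.2
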
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-053273 -n addons-053273
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-053273 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-053273 logs -n 25: (1.149150713s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-932462 --alsologtostderr --binary-mirror http://127.0.0.1:42911 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-932462 │ jenkins │ v1.37.0 │ 29 Nov 25 08:28 UTC │                     │
	│ delete  │ -p binary-mirror-932462                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-932462 │ jenkins │ v1.37.0 │ 29 Nov 25 08:28 UTC │ 29 Nov 25 08:28 UTC │
	│ addons  │ enable dashboard -p addons-053273                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-053273        │ jenkins │ v1.37.0 │ 29 Nov 25 08:28 UTC │                     │
	│ addons  │ disable dashboard -p addons-053273                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-053273        │ jenkins │ v1.37.0 │ 29 Nov 25 08:28 UTC │                     │
	│ start   │ -p addons-053273 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-053273        │ jenkins │ v1.37.0 │ 29 Nov 25 08:28 UTC │ 29 Nov 25 08:30 UTC │
	│ addons  │ addons-053273 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-053273        │ jenkins │ v1.37.0 │ 29 Nov 25 08:30 UTC │                     │
	│ addons  │ addons-053273 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-053273        │ jenkins │ v1.37.0 │ 29 Nov 25 08:30 UTC │                     │
	│ addons  │ enable headlamp -p addons-053273 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-053273        │ jenkins │ v1.37.0 │ 29 Nov 25 08:30 UTC │                     │
	│ addons  │ addons-053273 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-053273        │ jenkins │ v1.37.0 │ 29 Nov 25 08:30 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-053273                                                                                                                                                                                                                                                                                                                                                                                           │ addons-053273        │ jenkins │ v1.37.0 │ 29 Nov 25 08:30 UTC │ 29 Nov 25 08:30 UTC │
	│ addons  │ addons-053273 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-053273        │ jenkins │ v1.37.0 │ 29 Nov 25 08:30 UTC │                     │
	│ addons  │ addons-053273 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-053273        │ jenkins │ v1.37.0 │ 29 Nov 25 08:30 UTC │                     │
	│ addons  │ addons-053273 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-053273        │ jenkins │ v1.37.0 │ 29 Nov 25 08:30 UTC │                     │
	│ ssh     │ addons-053273 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-053273        │ jenkins │ v1.37.0 │ 29 Nov 25 08:30 UTC │                     │
	│ ssh     │ addons-053273 ssh cat /opt/local-path-provisioner/pvc-e4d98104-0771-4855-8667-9a2fb8670c8c_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-053273        │ jenkins │ v1.37.0 │ 29 Nov 25 08:30 UTC │ 29 Nov 25 08:30 UTC │
	│ ip      │ addons-053273 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-053273        │ jenkins │ v1.37.0 │ 29 Nov 25 08:30 UTC │ 29 Nov 25 08:30 UTC │
	│ addons  │ addons-053273 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-053273        │ jenkins │ v1.37.0 │ 29 Nov 25 08:30 UTC │                     │
	│ addons  │ addons-053273 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-053273        │ jenkins │ v1.37.0 │ 29 Nov 25 08:30 UTC │                     │
	│ addons  │ addons-053273 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-053273        │ jenkins │ v1.37.0 │ 29 Nov 25 08:30 UTC │                     │
	│ addons  │ addons-053273 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-053273        │ jenkins │ v1.37.0 │ 29 Nov 25 08:30 UTC │                     │
	│ addons  │ addons-053273 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-053273        │ jenkins │ v1.37.0 │ 29 Nov 25 08:30 UTC │                     │
	│ addons  │ addons-053273 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-053273        │ jenkins │ v1.37.0 │ 29 Nov 25 08:30 UTC │                     │
	│ addons  │ addons-053273 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-053273        │ jenkins │ v1.37.0 │ 29 Nov 25 08:31 UTC │                     │
	│ addons  │ addons-053273 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-053273        │ jenkins │ v1.37.0 │ 29 Nov 25 08:31 UTC │                     │
	│ ip      │ addons-053273 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-053273        │ jenkins │ v1.37.0 │ 29 Nov 25 08:32 UTC │ 29 Nov 25 08:32 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 08:28:20
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 08:28:20.821509   10554 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:28:20.821713   10554 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:28:20.821721   10554 out.go:374] Setting ErrFile to fd 2...
	I1129 08:28:20.821725   10554 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:28:20.821915   10554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 08:28:20.822394   10554 out.go:368] Setting JSON to false
	I1129 08:28:20.823165   10554 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":653,"bootTime":1764404248,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 08:28:20.823219   10554 start.go:143] virtualization: kvm guest
	I1129 08:28:20.825097   10554 out.go:179] * [addons-053273] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 08:28:20.826477   10554 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 08:28:20.826460   10554 notify.go:221] Checking for updates...
	I1129 08:28:20.829459   10554 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 08:28:20.830774   10554 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 08:28:20.831923   10554 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5652/.minikube
	I1129 08:28:20.833340   10554 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 08:28:20.834707   10554 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 08:28:20.835889   10554 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 08:28:20.858935   10554 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1129 08:28:20.859069   10554 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 08:28:20.916658   10554 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-11-29 08:28:20.90676456 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 08:28:20.916772   10554 docker.go:319] overlay module found
	I1129 08:28:20.918743   10554 out.go:179] * Using the docker driver based on user configuration
	I1129 08:28:20.920059   10554 start.go:309] selected driver: docker
	I1129 08:28:20.920073   10554 start.go:927] validating driver "docker" against <nil>
	I1129 08:28:20.920084   10554 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 08:28:20.920667   10554 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 08:28:20.979112   10554 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-11-29 08:28:20.969700064 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 08:28:20.979267   10554 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1129 08:28:20.979468   10554 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 08:28:20.981378   10554 out.go:179] * Using Docker driver with root privileges
	I1129 08:28:20.982704   10554 cni.go:84] Creating CNI manager for ""
	I1129 08:28:20.982759   10554 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 08:28:20.982768   10554 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1129 08:28:20.982860   10554 start.go:353] cluster config:
	{Name:addons-053273 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-053273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 08:28:20.984045   10554 out.go:179] * Starting "addons-053273" primary control-plane node in "addons-053273" cluster
	I1129 08:28:20.985007   10554 cache.go:134] Beginning downloading kic base image for docker with crio
	I1129 08:28:20.986123   10554 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 08:28:20.987769   10554 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 08:28:20.987796   10554 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1129 08:28:20.987805   10554 cache.go:65] Caching tarball of preloaded images
	I1129 08:28:20.987861   10554 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 08:28:20.987917   10554 preload.go:238] Found /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1129 08:28:20.987932   10554 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1129 08:28:20.988279   10554 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/config.json ...
	I1129 08:28:20.988307   10554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/config.json: {Name:mk28d0e1aea03b0eb123c81fc976b5dd98ac733e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:28:21.003731   10554 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1129 08:28:21.003890   10554 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1129 08:28:21.003916   10554 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1129 08:28:21.003922   10554 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1129 08:28:21.003929   10554 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	I1129 08:28:21.003935   10554 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from local cache
	I1129 08:28:33.312079   10554 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from cached tarball
	I1129 08:28:33.312119   10554 cache.go:243] Successfully downloaded all kic artifacts
	I1129 08:28:33.312171   10554 start.go:360] acquireMachinesLock for addons-053273: {Name:mkf4f6215d673a1a64758cf7cdbd392ebfc0d5ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 08:28:33.312278   10554 start.go:364] duration metric: took 85.285µs to acquireMachinesLock for "addons-053273"
	I1129 08:28:33.312318   10554 start.go:93] Provisioning new machine with config: &{Name:addons-053273 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-053273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 08:28:33.312385   10554 start.go:125] createHost starting for "" (driver="docker")
	I1129 08:28:33.313967   10554 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1129 08:28:33.314191   10554 start.go:159] libmachine.API.Create for "addons-053273" (driver="docker")
	I1129 08:28:33.314226   10554 client.go:173] LocalClient.Create starting
	I1129 08:28:33.314369   10554 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem
	I1129 08:28:33.453160   10554 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem
	I1129 08:28:33.585683   10554 cli_runner.go:164] Run: docker network inspect addons-053273 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1129 08:28:33.602536   10554 cli_runner.go:211] docker network inspect addons-053273 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1129 08:28:33.602617   10554 network_create.go:284] running [docker network inspect addons-053273] to gather additional debugging logs...
	I1129 08:28:33.602635   10554 cli_runner.go:164] Run: docker network inspect addons-053273
	W1129 08:28:33.618286   10554 cli_runner.go:211] docker network inspect addons-053273 returned with exit code 1
	I1129 08:28:33.618312   10554 network_create.go:287] error running [docker network inspect addons-053273]: docker network inspect addons-053273: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-053273 not found
	I1129 08:28:33.618323   10554 network_create.go:289] output of [docker network inspect addons-053273]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-053273 not found
	
	** /stderr **
	I1129 08:28:33.618398   10554 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 08:28:33.635090   10554 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cfee50}
	I1129 08:28:33.635130   10554 network_create.go:124] attempt to create docker network addons-053273 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1129 08:28:33.635191   10554 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-053273 addons-053273
	I1129 08:28:33.679674   10554 network_create.go:108] docker network addons-053273 192.168.49.0/24 created
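
For context on the free-subnet probe logged above at network.go:206: minikube walks candidate private /24 subnets and takes the first one that does not overlap an existing docker network, then creates a bridge network on it. A minimal Go sketch of that overlap check, using the standard net/netip package; the candidate list and in-use subnets below are illustrative, not the exact ones minikube consults.

// subnetcheck.go - sketch of picking a free private /24 like 192.168.49.0/24.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Subnets already claimed by other docker networks (example data).
	inUse := []netip.Prefix{
		netip.MustParsePrefix("172.17.0.0/16"), // default docker bridge
	}
	// Candidate private /24s, probed in order.
	candidates := []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"}

	for _, c := range candidates {
		p := netip.MustParsePrefix(c)
		free := true
		for _, u := range inUse {
			if p.Overlaps(u) { // netip.Prefix.Overlaps reports address-range overlap
				free = false
				break
			}
		}
		if free {
			fmt.Println("using free private subnet", p)
			return
		}
	}
	fmt.Println("no free subnet found")
}
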
	I1129 08:28:33.679704   10554 kic.go:121] calculated static IP "192.168.49.2" for the "addons-053273" container
	I1129 08:28:33.679770   10554 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1129 08:28:33.695770   10554 cli_runner.go:164] Run: docker volume create addons-053273 --label name.minikube.sigs.k8s.io=addons-053273 --label created_by.minikube.sigs.k8s.io=true
	I1129 08:28:33.713722   10554 oci.go:103] Successfully created a docker volume addons-053273
	I1129 08:28:33.713803   10554 cli_runner.go:164] Run: docker run --rm --name addons-053273-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-053273 --entrypoint /usr/bin/test -v addons-053273:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1129 08:28:40.364715   10554 cli_runner.go:217] Completed: docker run --rm --name addons-053273-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-053273 --entrypoint /usr/bin/test -v addons-053273:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib: (6.650866865s)
	I1129 08:28:40.364743   10554 oci.go:107] Successfully prepared a docker volume addons-053273
	I1129 08:28:40.364797   10554 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 08:28:40.364808   10554 kic.go:194] Starting extracting preloaded images to volume ...
	I1129 08:28:40.364882   10554 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-053273:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1129 08:28:44.715871   10554 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-053273:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.350931606s)
	I1129 08:28:44.715905   10554 kic.go:203] duration metric: took 4.35109227s to extract preloaded images to volume ...
	W1129 08:28:44.716022   10554 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1129 08:28:44.716077   10554 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1129 08:28:44.716118   10554 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1129 08:28:44.771751   10554 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-053273 --name addons-053273 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-053273 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-053273 --network addons-053273 --ip 192.168.49.2 --volume addons-053273:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1129 08:28:45.060386   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Running}}
	I1129 08:28:45.079989   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:28:45.097634   10554 cli_runner.go:164] Run: docker exec addons-053273 stat /var/lib/dpkg/alternatives/iptables
	I1129 08:28:45.140652   10554 oci.go:144] the created container "addons-053273" has a running status.
	I1129 08:28:45.140682   10554 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa...
	I1129 08:28:45.243853   10554 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1129 08:28:45.267041   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:28:45.285800   10554 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1129 08:28:45.285820   10554 kic_runner.go:114] Args: [docker exec --privileged addons-053273 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1129 08:28:45.331899   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:28:45.352996   10554 machine.go:94] provisionDockerMachine start ...
	I1129 08:28:45.353123   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:28:45.378611   10554 main.go:143] libmachine: Using SSH client type: native
	I1129 08:28:45.378969   10554 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1129 08:28:45.378989   10554 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 08:28:45.380475   10554 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34070->127.0.0.1:32768: read: connection reset by peer
	I1129 08:28:48.522703   10554 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-053273
	
	I1129 08:28:48.522733   10554 ubuntu.go:182] provisioning hostname "addons-053273"
	I1129 08:28:48.522788   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:28:48.540905   10554 main.go:143] libmachine: Using SSH client type: native
	I1129 08:28:48.541139   10554 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1129 08:28:48.541151   10554 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-053273 && echo "addons-053273" | sudo tee /etc/hostname
	I1129 08:28:48.691085   10554 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-053273
	
	I1129 08:28:48.691156   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:28:48.708799   10554 main.go:143] libmachine: Using SSH client type: native
	I1129 08:28:48.709131   10554 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1129 08:28:48.709161   10554 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-053273' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-053273/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-053273' | sudo tee -a /etc/hosts; 
				fi
			fi
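
The SSH command above keeps /etc/hosts consistent with the new hostname: if no line already ends in addons-053273, it either rewrites the existing 127.0.1.1 entry or appends one. A minimal Go sketch of the same idempotent edit, operating on an in-memory copy rather than via sudo; the helper name is ours, not minikube's.

// hostsentry.go - idempotent 127.0.1.1 hostname mapping, as in the shell above.
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry returns content with exactly one line mapping 127.0.1.1 to name.
func ensureHostsEntry(content, name string) string {
	lines := strings.Split(content, "\n")
	for _, l := range lines {
		if strings.HasSuffix(strings.TrimSpace(l), " "+name) {
			return content // hostname already present, nothing to do
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(strings.TrimSpace(l), "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name // replace the stale mapping
			return strings.Join(lines, "\n")
		}
	}
	return content + "\n127.0.1.1 " + name + "\n" // append a fresh mapping
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Print(ensureHostsEntry(string(data), "addons-053273"))
}
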
	I1129 08:28:48.851437   10554 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 08:28:48.851464   10554 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-5652/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-5652/.minikube}
	I1129 08:28:48.851501   10554 ubuntu.go:190] setting up certificates
	I1129 08:28:48.851512   10554 provision.go:84] configureAuth start
	I1129 08:28:48.851562   10554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-053273
	I1129 08:28:48.868620   10554 provision.go:143] copyHostCerts
	I1129 08:28:48.868704   10554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem (1078 bytes)
	I1129 08:28:48.868891   10554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem (1123 bytes)
	I1129 08:28:48.868984   10554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem (1675 bytes)
	I1129 08:28:48.869137   10554 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem org=jenkins.addons-053273 san=[127.0.0.1 192.168.49.2 addons-053273 localhost minikube]
	I1129 08:28:48.913132   10554 provision.go:177] copyRemoteCerts
	I1129 08:28:48.913185   10554 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 08:28:48.913218   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:28:48.930309   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:28:49.030912   10554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1129 08:28:49.049850   10554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1129 08:28:49.066611   10554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1129 08:28:49.082965   10554 provision.go:87] duration metric: took 231.433651ms to configureAuth
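
configureAuth above generates a server certificate whose SANs (127.0.0.1, 192.168.49.2, addons-053273, localhost, minikube) come from the provision.go:117 log line. A minimal sketch of producing such a SAN-bearing certificate with crypto/x509; it self-signs for brevity, whereas minikube signs with its own CA, and the lifetime below is taken from the CertExpiration value in the cluster config.

// servercert.go - self-signed server cert carrying the SANs seen in the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-053273"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log: addresses and names the server must answer as.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		DNSNames:    []string{"addons-053273", "localhost", "minikube"},
	}
	// Template doubles as parent, so the certificate is self-signed here.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
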
	I1129 08:28:49.082998   10554 ubuntu.go:206] setting minikube options for container-runtime
	I1129 08:28:49.083158   10554 config.go:182] Loaded profile config "addons-053273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:28:49.083260   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:28:49.103088   10554 main.go:143] libmachine: Using SSH client type: native
	I1129 08:28:49.103334   10554 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1129 08:28:49.103351   10554 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 08:28:49.381146   10554 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 08:28:49.381173   10554 machine.go:97] duration metric: took 4.028143818s to provisionDockerMachine
	I1129 08:28:49.381184   10554 client.go:176] duration metric: took 16.066947847s to LocalClient.Create
	I1129 08:28:49.381202   10554 start.go:167] duration metric: took 16.067010557s to libmachine.API.Create "addons-053273"
	I1129 08:28:49.381212   10554 start.go:293] postStartSetup for "addons-053273" (driver="docker")
	I1129 08:28:49.381225   10554 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 08:28:49.381287   10554 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 08:28:49.381335   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:28:49.399032   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:28:49.500765   10554 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 08:28:49.504289   10554 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 08:28:49.504322   10554 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 08:28:49.504336   10554 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5652/.minikube/addons for local assets ...
	I1129 08:28:49.504402   10554 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5652/.minikube/files for local assets ...
	I1129 08:28:49.504427   10554 start.go:296] duration metric: took 123.209021ms for postStartSetup
	I1129 08:28:49.504706   10554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-053273
	I1129 08:28:49.522259   10554 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/config.json ...
	I1129 08:28:49.522531   10554 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 08:28:49.522575   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:28:49.539766   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:28:49.636768   10554 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 08:28:49.641061   10554 start.go:128] duration metric: took 16.328660651s to createHost
	I1129 08:28:49.641087   10554 start.go:83] releasing machines lock for "addons-053273", held for 16.328784053s
	I1129 08:28:49.641149   10554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-053273
	I1129 08:28:49.658233   10554 ssh_runner.go:195] Run: cat /version.json
	I1129 08:28:49.658277   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:28:49.658319   10554 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 08:28:49.658406   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:28:49.675498   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:28:49.675862   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:28:49.826148   10554 ssh_runner.go:195] Run: systemctl --version
	I1129 08:28:49.832959   10554 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 08:28:49.866790   10554 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 08:28:49.871459   10554 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 08:28:49.871523   10554 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 08:28:49.896917   10554 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1129 08:28:49.896944   10554 start.go:496] detecting cgroup driver to use...
	I1129 08:28:49.896978   10554 detect.go:190] detected "systemd" cgroup driver on host os
	I1129 08:28:49.897013   10554 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 08:28:49.912036   10554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 08:28:49.924055   10554 docker.go:218] disabling cri-docker service (if available) ...
	I1129 08:28:49.924110   10554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 08:28:49.939596   10554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 08:28:49.956279   10554 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 08:28:50.035928   10554 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 08:28:50.123054   10554 docker.go:234] disabling docker service ...
	I1129 08:28:50.123119   10554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 08:28:50.140502   10554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 08:28:50.152509   10554 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 08:28:50.234277   10554 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 08:28:50.314764   10554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 08:28:50.326516   10554 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 08:28:50.339692   10554 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 08:28:50.339759   10554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 08:28:50.349400   10554 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1129 08:28:50.349465   10554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 08:28:50.358001   10554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 08:28:50.366061   10554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 08:28:50.374031   10554 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 08:28:50.381767   10554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 08:28:50.389873   10554 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 08:28:50.402539   10554 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
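
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: the pause image, the systemd cgroup manager, a "pod" conmon_cgroup, and a default_sysctls entry that reopens unprivileged low ports. A minimal Go sketch of the first two rewrites against an in-memory sample; the regular expressions mirror the log's sed expressions, and the sample TOML input is illustrative.

// crioconf.go - sed-style line rewrites on a sample 02-crio.conf.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "cgroupfs"
`
	// Mirrors: sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Mirrors: sed 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "systemd"`)
	fmt.Print(conf)
}
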
	I1129 08:28:50.411589   10554 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 08:28:50.418610   10554 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1129 08:28:50.418654   10554 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1129 08:28:50.430731   10554 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 08:28:50.437994   10554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 08:28:50.511695   10554 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1129 08:28:50.642132   10554 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 08:28:50.642217   10554 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 08:28:50.646143   10554 start.go:564] Will wait 60s for crictl version
	I1129 08:28:50.646202   10554 ssh_runner.go:195] Run: which crictl
	I1129 08:28:50.649640   10554 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 08:28:50.674924   10554 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1129 08:28:50.675040   10554 ssh_runner.go:195] Run: crio --version
	I1129 08:28:50.702427   10554 ssh_runner.go:195] Run: crio --version
	I1129 08:28:50.732565   10554 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1129 08:28:50.733691   10554 cli_runner.go:164] Run: docker network inspect addons-053273 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 08:28:50.750739   10554 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1129 08:28:50.754893   10554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 08:28:50.765068   10554 kubeadm.go:884] updating cluster {Name:addons-053273 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-053273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 08:28:50.765184   10554 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 08:28:50.765229   10554 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 08:28:50.794304   10554 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 08:28:50.794325   10554 crio.go:433] Images already preloaded, skipping extraction
	I1129 08:28:50.794371   10554 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 08:28:50.818295   10554 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 08:28:50.818316   10554 cache_images.go:86] Images are preloaded, skipping loading
	I1129 08:28:50.818324   10554 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1129 08:28:50.818409   10554 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-053273 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-053273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 08:28:50.818491   10554 ssh_runner.go:195] Run: crio config
	I1129 08:28:50.862704   10554 cni.go:84] Creating CNI manager for ""
	I1129 08:28:50.862728   10554 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 08:28:50.862747   10554 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 08:28:50.862775   10554 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-053273 NodeName:addons-053273 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 08:28:50.862934   10554 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-053273"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
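
	Note that the KubeletConfiguration above pins every evictionHard threshold at "0%", which is what the "# disable disk resource management by default" comment refers to: the kubelet never evicts pods on disk pressure in this test environment. A minimal Go sketch that parses just those knobs from the generated YAML, assuming gopkg.in/yaml.v3 is on the module path; the struct models only the handful of fields shown in the log.

// kubeletcfg.go - parse the disk-eviction knobs of the generated config.
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

type kubeletConfig struct {
	CgroupDriver string            `yaml:"cgroupDriver"`
	FailSwapOn   bool              `yaml:"failSwapOn"`
	EvictionHard map[string]string `yaml:"evictionHard"`
}

const doc = `
cgroupDriver: systemd
failSwapOn: false
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
`

func main() {
	var cfg kubeletConfig
	if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
		panic(err)
	}
	// All thresholds at 0% means disk-pressure eviction is effectively off.
	fmt.Printf("driver=%s failSwapOn=%v evictionHard=%v\n",
		cfg.CgroupDriver, cfg.FailSwapOn, cfg.EvictionHard)
}
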
	
	I1129 08:28:50.863005   10554 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 08:28:50.870833   10554 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 08:28:50.870894   10554 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 08:28:50.878633   10554 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1129 08:28:50.891014   10554 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 08:28:50.904964   10554 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1129 08:28:50.917521   10554 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1129 08:28:50.921176   10554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 08:28:50.930984   10554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 08:28:51.004324   10554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 08:28:51.027097   10554 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273 for IP: 192.168.49.2
	I1129 08:28:51.027128   10554 certs.go:195] generating shared ca certs ...
	I1129 08:28:51.027144   10554 certs.go:227] acquiring lock for ca certs: {Name:mk9945b3aa42b3d50b9bfd795af3a1c63f4e35bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:28:51.027287   10554 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key
	I1129 08:28:51.059569   10554 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt ...
	I1129 08:28:51.059600   10554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt: {Name:mkcd2e4cfe3c1a0a3009971ae94ce4a87857db91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:28:51.059803   10554 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key ...
	I1129 08:28:51.059819   10554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key: {Name:mk5039828d29547a7908ecefaca5b82cb351479e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:28:51.059952   10554 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key
	I1129 08:28:51.119838   10554 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.crt ...
	I1129 08:28:51.119877   10554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.crt: {Name:mk012dbed843d7e9f088d181608049b8a4fc2e95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:28:51.120053   10554 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key ...
	I1129 08:28:51.120065   10554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key: {Name:mk7c0df1c05dee6415ee4f2bea55b60104c150ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:28:51.120138   10554 certs.go:257] generating profile certs ...
	I1129 08:28:51.120190   10554 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/client.key
	I1129 08:28:51.120204   10554 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/client.crt with IP's: []
	I1129 08:28:51.195792   10554 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/client.crt ...
	I1129 08:28:51.195820   10554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/client.crt: {Name:mk167101f2910b47f332157b6a5bd07cf45e6250 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:28:51.195988   10554 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/client.key ...
	I1129 08:28:51.195998   10554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/client.key: {Name:mkfb96b93aa2047d803c91a450fe6fb8ef3d646f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:28:51.196066   10554 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/apiserver.key.c57785be
	I1129 08:28:51.196084   10554 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/apiserver.crt.c57785be with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1129 08:28:51.308591   10554 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/apiserver.crt.c57785be ...
	I1129 08:28:51.308618   10554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/apiserver.crt.c57785be: {Name:mkf68d9053a57ec2aaf85d62bf1fbb6d37a55220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:28:51.308769   10554 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/apiserver.key.c57785be ...
	I1129 08:28:51.308781   10554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/apiserver.key.c57785be: {Name:mkfddb4836ad4620b1ec6797f886f8748b21e6dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:28:51.308859   10554 certs.go:382] copying /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/apiserver.crt.c57785be -> /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/apiserver.crt
	I1129 08:28:51.308938   10554 certs.go:386] copying /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/apiserver.key.c57785be -> /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/apiserver.key
	I1129 08:28:51.308985   10554 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/proxy-client.key
	I1129 08:28:51.309003   10554 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/proxy-client.crt with IP's: []
	I1129 08:28:51.425058   10554 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/proxy-client.crt ...
	I1129 08:28:51.425085   10554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/proxy-client.crt: {Name:mk941f9023d64e26d4e91ab2cb3799246cb4277e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:28:51.425284   10554 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/proxy-client.key ...
	I1129 08:28:51.425300   10554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/proxy-client.key: {Name:mkd49676b219b87226fd39b085adc976a585b9d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:28:51.425483   10554 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem (1679 bytes)
	I1129 08:28:51.425522   10554 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem (1078 bytes)
	I1129 08:28:51.425554   10554 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem (1123 bytes)
	I1129 08:28:51.425577   10554 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem (1675 bytes)
	I1129 08:28:51.426160   10554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 08:28:51.443317   10554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 08:28:51.459404   10554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 08:28:51.475477   10554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1129 08:28:51.491086   10554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1129 08:28:51.506554   10554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 08:28:51.522547   10554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 08:28:51.538393   10554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1129 08:28:51.554301   10554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 08:28:51.572239   10554 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 08:28:51.583645   10554 ssh_runner.go:195] Run: openssl version
	I1129 08:28:51.589326   10554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 08:28:51.599116   10554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 08:28:51.602465   10554 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1129 08:28:51.602512   10554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 08:28:51.635397   10554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
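	
	Note: the b5213941.0 link name is not arbitrary. OpenSSL looks CA certificates up by subject-name hash, so the symlink must be named <hash>.0; the openssl x509 -hash call above is what produces that value:
	
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	  # prints b5213941, hence the link /etc/ssl/certs/b5213941.0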
	I1129 08:28:51.643818   10554 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 08:28:51.647303   10554 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1129 08:28:51.647353   10554 kubeadm.go:401] StartCluster: {Name:addons-053273 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-053273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 08:28:51.647434   10554 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 08:28:51.647499   10554 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 08:28:51.672971   10554 cri.go:89] found id: ""
	I1129 08:28:51.673027   10554 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 08:28:51.680585   10554 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1129 08:28:51.687988   10554 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1129 08:28:51.688039   10554 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1129 08:28:51.695255   10554 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1129 08:28:51.695270   10554 kubeadm.go:158] found existing configuration files:
	
	I1129 08:28:51.695311   10554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1129 08:28:51.702322   10554 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1129 08:28:51.702370   10554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1129 08:28:51.709181   10554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1129 08:28:51.716224   10554 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1129 08:28:51.716275   10554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1129 08:28:51.723000   10554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1129 08:28:51.729897   10554 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1129 08:28:51.729946   10554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1129 08:28:51.736823   10554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1129 08:28:51.743934   10554 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1129 08:28:51.744006   10554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1129 08:28:51.751038   10554 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
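	
	Note: the long --ignore-preflight-errors list exists because the docker driver runs Kubernetes inside a container, where checks such as Swap, Mem, Port-10250, and SystemVerification are expected to trip (see the "ignoring SystemVerification" line above). The preflight phase can also be replayed on its own to see which checks fail (sketch, reusing the same config file):
	
	  sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml \
	    --ignore-preflight-errors=SystemVerification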
	I1129 08:28:51.787714   10554 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1129 08:28:51.787784   10554 kubeadm.go:319] [preflight] Running pre-flight checks
	I1129 08:28:51.808465   10554 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1129 08:28:51.808567   10554 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1129 08:28:51.808602   10554 kubeadm.go:319] OS: Linux
	I1129 08:28:51.808663   10554 kubeadm.go:319] CGROUPS_CPU: enabled
	I1129 08:28:51.808743   10554 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1129 08:28:51.808857   10554 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1129 08:28:51.808934   10554 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1129 08:28:51.808991   10554 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1129 08:28:51.809051   10554 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1129 08:28:51.809126   10554 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1129 08:28:51.809198   10554 kubeadm.go:319] CGROUPS_IO: enabled
	I1129 08:28:51.862315   10554 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1129 08:28:51.862443   10554 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1129 08:28:51.862584   10554 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1129 08:28:51.869258   10554 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1129 08:28:51.871024   10554 out.go:252]   - Generating certificates and keys ...
	I1129 08:28:51.871110   10554 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1129 08:28:51.871169   10554 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1129 08:28:51.962439   10554 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1129 08:28:52.130390   10554 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1129 08:28:52.469971   10554 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1129 08:28:52.911037   10554 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1129 08:28:52.992053   10554 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1129 08:28:52.992197   10554 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-053273 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1129 08:28:53.196049   10554 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1129 08:28:53.196226   10554 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-053273 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1129 08:28:53.632437   10554 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1129 08:28:53.780766   10554 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1129 08:28:53.815258   10554 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1129 08:28:53.815318   10554 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1129 08:28:54.056010   10554 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1129 08:28:54.175491   10554 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1129 08:28:54.341324   10554 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1129 08:28:54.463568   10554 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1129 08:28:54.537104   10554 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1129 08:28:54.537504   10554 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1129 08:28:54.541329   10554 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1129 08:28:54.544925   10554 out.go:252]   - Booting up control plane ...
	I1129 08:28:54.545048   10554 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1129 08:28:54.545189   10554 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1129 08:28:54.545295   10554 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1129 08:28:54.557337   10554 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1129 08:28:54.557430   10554 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1129 08:28:54.564900   10554 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1129 08:28:54.565157   10554 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1129 08:28:54.565226   10554 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1129 08:28:54.664795   10554 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1129 08:28:54.664964   10554 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1129 08:28:55.166448   10554 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.790312ms
	I1129 08:28:55.169317   10554 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1129 08:28:55.169453   10554 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1129 08:28:55.169577   10554 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1129 08:28:55.169683   10554 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1129 08:28:56.173822   10554 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004405463s
	I1129 08:28:57.200575   10554 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.031242518s
	I1129 08:28:58.671238   10554 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501826625s
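	
	Note: the control-plane probes above are plain HTTP(S) endpoints and can be hit by hand from inside the node (sketch; -k skips certificate verification, and the default authorizers allow anonymous access to these health paths):
	
	  curl -k https://192.168.49.2:8443/livez      # kube-apiserver
	  curl -k https://127.0.0.1:10257/healthz      # kube-controller-manager
	  curl -k https://127.0.0.1:10259/livez        # kube-scheduler
	  curl http://127.0.0.1:10248/healthz          # kubelet (plain HTTP)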
	I1129 08:28:58.680685   10554 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1129 08:28:58.689098   10554 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1129 08:28:58.696181   10554 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1129 08:28:58.696436   10554 kubeadm.go:319] [mark-control-plane] Marking the node addons-053273 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1129 08:28:58.703903   10554 kubeadm.go:319] [bootstrap-token] Using token: 4ugug3.583w0frhsqgeg0aj
	I1129 08:28:58.705341   10554 out.go:252]   - Configuring RBAC rules ...
	I1129 08:28:58.705467   10554 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1129 08:28:58.708088   10554 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1129 08:28:58.712542   10554 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1129 08:28:58.714657   10554 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1129 08:28:58.716716   10554 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1129 08:28:58.719491   10554 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1129 08:28:59.077239   10554 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1129 08:28:59.494442   10554 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1129 08:29:00.076815   10554 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1129 08:29:00.077597   10554 kubeadm.go:319] 
	I1129 08:29:00.077684   10554 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1129 08:29:00.077694   10554 kubeadm.go:319] 
	I1129 08:29:00.077778   10554 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1129 08:29:00.077788   10554 kubeadm.go:319] 
	I1129 08:29:00.077822   10554 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1129 08:29:00.077944   10554 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1129 08:29:00.078030   10554 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1129 08:29:00.078045   10554 kubeadm.go:319] 
	I1129 08:29:00.078109   10554 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1129 08:29:00.078116   10554 kubeadm.go:319] 
	I1129 08:29:00.078153   10554 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1129 08:29:00.078160   10554 kubeadm.go:319] 
	I1129 08:29:00.078201   10554 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1129 08:29:00.078265   10554 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1129 08:29:00.078327   10554 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1129 08:29:00.078333   10554 kubeadm.go:319] 
	I1129 08:29:00.078425   10554 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1129 08:29:00.078545   10554 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1129 08:29:00.078553   10554 kubeadm.go:319] 
	I1129 08:29:00.078622   10554 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 4ugug3.583w0frhsqgeg0aj \
	I1129 08:29:00.078729   10554 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c67f487f8861b4229187cb5510054720af943291cf02c59a2b23e487361f58e2 \
	I1129 08:29:00.078752   10554 kubeadm.go:319] 	--control-plane 
	I1129 08:29:00.078758   10554 kubeadm.go:319] 
	I1129 08:29:00.078917   10554 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1129 08:29:00.078928   10554 kubeadm.go:319] 
	I1129 08:29:00.079038   10554 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 4ugug3.583w0frhsqgeg0aj \
	I1129 08:29:00.079182   10554 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c67f487f8861b4229187cb5510054720af943291cf02c59a2b23e487361f58e2 
	I1129 08:29:00.080830   10554 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1129 08:29:00.081013   10554 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
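	
	Note: the --discovery-token-ca-cert-hash in the join commands above is a SHA-256 over the cluster CA's public key. Should a joining node need to recompute it, the upstream recipe is (sketch; assumes the default RSA CA key, using this cluster's certificateDir):
	
	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'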
	I1129 08:29:00.081048   10554 cni.go:84] Creating CNI manager for ""
	I1129 08:29:00.081060   10554 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 08:29:00.083488   10554 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1129 08:29:00.084715   10554 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1129 08:29:00.088721   10554 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1129 08:29:00.088743   10554 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1129 08:29:00.101740   10554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1129 08:29:00.296361   10554 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1129 08:29:00.296453   10554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 08:29:00.296482   10554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-053273 minikube.k8s.io/updated_at=2025_11_29T08_29_00_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af minikube.k8s.io/name=addons-053273 minikube.k8s.io/primary=true
	I1129 08:29:00.305481   10554 ops.go:34] apiserver oom_adj: -16
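	
	Note: minikube reads /proc/<pid>/oom_adj to confirm the apiserver is shielded from the OOM killer; -16 is strongly negative on that legacy scale. The same check by hand (sketch; oom_score_adj is the modern interface the kernel actually uses):
	
	  cat /proc/$(pgrep kube-apiserver)/oom_adj
	  cat /proc/$(pgrep kube-apiserver)/oom_score_adj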
	I1129 08:29:00.369815   10554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 08:29:00.870690   10554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 08:29:01.369985   10554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 08:29:01.870925   10554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 08:29:02.370453   10554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 08:29:02.870935   10554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 08:29:03.370376   10554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 08:29:03.870021   10554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 08:29:04.370789   10554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 08:29:04.869961   10554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 08:29:05.370495   10554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 08:29:05.437850   10554 kubeadm.go:1114] duration metric: took 5.141457683s to wait for elevateKubeSystemPrivileges
	I1129 08:29:05.437892   10554 kubeadm.go:403] duration metric: took 13.790541822s to StartCluster
	I1129 08:29:05.437911   10554 settings.go:142] acquiring lock: {Name:mkebe0b2667fa03f51e92459cd7b7f37fd4c23bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:29:05.438031   10554 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 08:29:05.438493   10554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/kubeconfig: {Name:mk6280be5f60ace95b1c1acc672c087e2366542f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:29:05.438709   10554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1129 08:29:05.438745   10554 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 08:29:05.438801   10554 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1129 08:29:05.438955   10554 addons.go:70] Setting ingress-dns=true in profile "addons-053273"
	I1129 08:29:05.438969   10554 addons.go:70] Setting metrics-server=true in profile "addons-053273"
	I1129 08:29:05.438976   10554 config.go:182] Loaded profile config "addons-053273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:29:05.438991   10554 addons.go:70] Setting gcp-auth=true in profile "addons-053273"
	I1129 08:29:05.438993   10554 addons.go:70] Setting storage-provisioner=true in profile "addons-053273"
	I1129 08:29:05.439003   10554 addons.go:70] Setting volumesnapshots=true in profile "addons-053273"
	I1129 08:29:05.439009   10554 mustload.go:66] Loading cluster: addons-053273
	I1129 08:29:05.439014   10554 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-053273"
	I1129 08:29:05.438980   10554 addons.go:239] Setting addon ingress-dns=true in "addons-053273"
	I1129 08:29:05.439027   10554 addons.go:70] Setting registry-creds=true in profile "addons-053273"
	I1129 08:29:05.439017   10554 addons.go:70] Setting registry=true in profile "addons-053273"
	I1129 08:29:05.439018   10554 addons.go:239] Setting addon volumesnapshots=true in "addons-053273"
	I1129 08:29:05.439044   10554 addons.go:239] Setting addon registry-creds=true in "addons-053273"
	I1129 08:29:05.439060   10554 addons.go:239] Setting addon registry=true in "addons-053273"
	I1129 08:29:05.439068   10554 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:29:05.439074   10554 addons.go:70] Setting ingress=true in profile "addons-053273"
	I1129 08:29:05.439080   10554 addons.go:70] Setting cloud-spanner=true in profile "addons-053273"
	I1129 08:29:05.439089   10554 addons.go:239] Setting addon ingress=true in "addons-053273"
	I1129 08:29:05.439094   10554 addons.go:239] Setting addon cloud-spanner=true in "addons-053273"
	I1129 08:29:05.439107   10554 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:29:05.439110   10554 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:29:05.439117   10554 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:29:05.439123   10554 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-053273"
	I1129 08:29:05.439145   10554 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-053273"
	I1129 08:29:05.439173   10554 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:29:05.439234   10554 config.go:182] Loaded profile config "addons-053273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:29:05.439494   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:29:05.439620   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:29:05.439639   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:29:05.439640   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:29:05.439649   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:29:05.439665   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:29:05.439039   10554 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-053273"
	I1129 08:29:05.439891   10554 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:29:05.440334   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:29:05.438986   10554 addons.go:70] Setting default-storageclass=true in profile "addons-053273"
	I1129 08:29:05.441059   10554 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-053273"
	I1129 08:29:05.441145   10554 out.go:179] * Verifying Kubernetes components...
	I1129 08:29:05.441356   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:29:05.438964   10554 addons.go:70] Setting inspektor-gadget=true in profile "addons-053273"
	I1129 08:29:05.441446   10554 addons.go:239] Setting addon inspektor-gadget=true in "addons-053273"
	I1129 08:29:05.441473   10554 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:29:05.438993   10554 addons.go:70] Setting volcano=true in profile "addons-053273"
	I1129 08:29:05.441826   10554 addons.go:239] Setting addon volcano=true in "addons-053273"
	I1129 08:29:05.441865   10554 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:29:05.438982   10554 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-053273"
	I1129 08:29:05.441927   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:29:05.439075   10554 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:29:05.442305   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:29:05.442363   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:29:05.439063   10554 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:29:05.442412   10554 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-053273"
	I1129 08:29:05.442453   10554 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:29:05.438980   10554 addons.go:239] Setting addon metrics-server=true in "addons-053273"
	I1129 08:29:05.442735   10554 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:29:05.438985   10554 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-053273"
	I1129 08:29:05.443900   10554 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-053273"
	I1129 08:29:05.439020   10554 addons.go:239] Setting addon storage-provisioner=true in "addons-053273"
	I1129 08:29:05.444139   10554 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:29:05.438956   10554 addons.go:70] Setting yakd=true in profile "addons-053273"
	I1129 08:29:05.444989   10554 addons.go:239] Setting addon yakd=true in "addons-053273"
	I1129 08:29:05.445020   10554 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:29:05.445828   10554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 08:29:05.451825   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:29:05.452065   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:29:05.452652   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:29:05.454082   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:29:05.451830   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:29:05.456426   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:29:05.503977   10554 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1129 08:29:05.505759   10554 addons.go:239] Setting addon default-storageclass=true in "addons-053273"
	I1129 08:29:05.505895   10554 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:29:05.505798   10554 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1129 08:29:05.505956   10554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1129 08:29:05.506018   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
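	
	Note: the Go template in the inspect call above (repeated for each addon below) digs the host port mapped to the container's 22/tcp out of .NetworkSettings.Ports; this is how minikube discovers the SSH port (32768 in this run, per the sshutil lines further down). An equivalent, simpler probe (sketch):
	
	  docker port addons-053273 22/tcp
	  # e.g. 0.0.0.0:32768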
	I1129 08:29:05.506271   10554 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1129 08:29:05.506519   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:29:05.508257   10554 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1129 08:29:05.509311   10554 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1129 08:29:05.510408   10554 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1129 08:29:05.510427   10554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1129 08:29:05.510479   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:29:05.511920   10554 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1129 08:29:05.512537   10554 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1129 08:29:05.513418   10554 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:29:05.514571   10554 out.go:179]   - Using image docker.io/registry:3.0.0
	I1129 08:29:05.515597   10554 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1129 08:29:05.515611   10554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1129 08:29:05.515658   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:29:05.516103   10554 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1129 08:29:05.516116   10554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1129 08:29:05.516171   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:29:05.523811   10554 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1129 08:29:05.523812   10554 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 08:29:05.525828   10554 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1129 08:29:05.526825   10554 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 08:29:05.526862   10554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 08:29:05.526920   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:29:05.527327   10554 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1129 08:29:05.527990   10554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1129 08:29:05.527345   10554 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1129 08:29:05.528139   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:29:05.528091   10554 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1129 08:29:05.529540   10554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1129 08:29:05.530122   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:29:05.533473   10554 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1129 08:29:05.533560   10554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1129 08:29:05.533643   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:29:05.541458   10554 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-053273"
	I1129 08:29:05.541507   10554 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:29:05.544114   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:29:05.557762   10554 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1129 08:29:05.557762   10554 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	W1129 08:29:05.558424   10554 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1129 08:29:05.559884   10554 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1129 08:29:05.559902   10554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1129 08:29:05.560009   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:29:05.560322   10554 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1129 08:29:05.561471   10554 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1129 08:29:05.562473   10554 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1129 08:29:05.563528   10554 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1129 08:29:05.564593   10554 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1129 08:29:05.564608   10554 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1129 08:29:05.566110   10554 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1129 08:29:05.566131   10554 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1129 08:29:05.566205   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:29:05.567417   10554 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1129 08:29:05.568467   10554 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1129 08:29:05.568707   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:29:05.569467   10554 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1129 08:29:05.569487   10554 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1129 08:29:05.569565   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:29:05.570616   10554 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1129 08:29:05.571587   10554 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1129 08:29:05.571613   10554 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1129 08:29:05.571671   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:29:05.573934   10554 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1129 08:29:05.575011   10554 out.go:179]   - Using image docker.io/busybox:stable
	I1129 08:29:05.577788   10554 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1129 08:29:05.577878   10554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1129 08:29:05.578023   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:29:05.581404   10554 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1129 08:29:05.582728   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:29:05.583602   10554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1129 08:29:05.585737   10554 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1129 08:29:05.588073   10554 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1129 08:29:05.588151   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:29:05.589943   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:29:05.594978   10554 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 08:29:05.595003   10554 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 08:29:05.595094   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:29:05.602810   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:29:05.614889   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:29:05.615821   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:29:05.616835   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:29:05.622412   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:29:05.629753   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:29:05.633870   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:29:05.635325   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:29:05.635406   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	W1129 08:29:05.644679   10554 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1129 08:29:05.644739   10554 retry.go:31] will retry after 350.202563ms: ssh: handshake failed: EOF
	I1129 08:29:05.649787   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:29:05.658938   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:29:05.661783   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:29:05.663821   10554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 08:29:05.746438   10554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1129 08:29:05.759672   10554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1129 08:29:05.769985   10554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1129 08:29:05.774242   10554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 08:29:05.791653   10554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1129 08:29:05.794680   10554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1129 08:29:05.810061   10554 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1129 08:29:05.810087   10554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1129 08:29:05.814473   10554 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1129 08:29:05.814495   10554 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1129 08:29:05.817964   10554 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1129 08:29:05.817984   10554 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1129 08:29:05.820754   10554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1129 08:29:05.826647   10554 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1129 08:29:05.826676   10554 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1129 08:29:05.831091   10554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1129 08:29:05.834280   10554 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1129 08:29:05.834306   10554 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1129 08:29:05.836739   10554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 08:29:05.849129   10554 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1129 08:29:05.849156   10554 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1129 08:29:05.870783   10554 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1129 08:29:05.870815   10554 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1129 08:29:05.878502   10554 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1129 08:29:05.878533   10554 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1129 08:29:05.897317   10554 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1129 08:29:05.897351   10554 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1129 08:29:05.897367   10554 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1129 08:29:05.897381   10554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1129 08:29:05.897317   10554 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1129 08:29:05.897443   10554 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1129 08:29:05.912654   10554 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1129 08:29:05.912686   10554 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1129 08:29:05.943881   10554 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1129 08:29:05.943907   10554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1129 08:29:05.949385   10554 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1129 08:29:05.949418   10554 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1129 08:29:05.953139   10554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1129 08:29:05.959008   10554 node_ready.go:35] waiting up to 6m0s for node "addons-053273" to be "Ready" ...
	I1129 08:29:05.959287   10554 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
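At this point the installer starts polling the node for the "Ready" condition with a 6m0s budget; the "will retry" warnings that recur below are that poll. A rough manual equivalent, assuming kubectl is pointed at the same cluster (illustrative only, not what minikube runs internally):

	kubectl wait --for=condition=Ready node/addons-053273 --timeout=6m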
	I1129 08:29:05.972196   10554 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1129 08:29:05.972219   10554 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1129 08:29:05.977651   10554 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1129 08:29:05.977674   10554 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1129 08:29:05.987915   10554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1129 08:29:06.000555   10554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1129 08:29:06.041722   10554 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1129 08:29:06.041751   10554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1129 08:29:06.051406   10554 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1129 08:29:06.051435   10554 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1129 08:29:06.118312   10554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1129 08:29:06.139356   10554 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1129 08:29:06.139458   10554 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1129 08:29:06.185145   10554 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1129 08:29:06.185174   10554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1129 08:29:06.227236   10554 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1129 08:29:06.227264   10554 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1129 08:29:06.284699   10554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1129 08:29:06.296447   10554 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1129 08:29:06.296473   10554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1129 08:29:06.354475   10554 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1129 08:29:06.354497   10554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1129 08:29:06.406806   10554 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1129 08:29:06.406830   10554 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1129 08:29:06.442173   10554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1129 08:29:06.469381   10554 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-053273" context rescaled to 1 replicas
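The rescale above trims CoreDNS down to a single replica for the single-node cluster. A manual equivalent, sketched with plain kubectl (minikube performs this through its API client, not the CLI):

	kubectl -n kube-system scale deployment coredns --replicas=1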
	I1129 08:29:06.943457   10554 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.196981726s)
	I1129 08:29:06.943507   10554 addons.go:495] Verifying addon ingress=true in "addons-053273"
	I1129 08:29:06.943531   10554 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.183831201s)
	I1129 08:29:06.943603   10554 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.173596495s)
	I1129 08:29:06.943651   10554 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.169384355s)
	I1129 08:29:06.943747   10554 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.152066832s)
	I1129 08:29:06.943879   10554 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.149165517s)
	I1129 08:29:06.943929   10554 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.123143081s)
	I1129 08:29:06.943984   10554 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.112867068s)
	I1129 08:29:06.944018   10554 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.107218724s)
	I1129 08:29:06.944095   10554 addons.go:495] Verifying addon registry=true in "addons-053273"
	I1129 08:29:06.944242   10554 addons.go:495] Verifying addon metrics-server=true in "addons-053273"
	I1129 08:29:06.948742   10554 out.go:179] * Verifying registry addon...
	I1129 08:29:06.948748   10554 out.go:179] * Verifying ingress addon...
	I1129 08:29:06.948801   10554 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-053273 service yakd-dashboard -n yakd-dashboard
	
	W1129 08:29:06.951109   10554 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
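The "object has been modified" error above is Kubernetes optimistic concurrency at work: the update to the local-path StorageClass carried a stale resourceVersion because another writer changed the object between the read and the write. The standard remedy is to re-read and retry the update. A minimal shell sketch, assuming the usual is-default-class annotation is what needs setting (the retry loop and names here are illustrative, not minikube's actual code path):

	# Retry marking local-path as the default storage class until the
	# write is not rejected by a concurrent modification (hypothetical loop).
	for attempt in 1 2 3 4 5; do
	  kubectl patch storageclass local-path -p \
	    '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' \
	    && break
	  sleep 1
	done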
	I1129 08:29:06.951206   10554 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1129 08:29:06.951208   10554 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1129 08:29:06.953623   10554 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1129 08:29:06.953640   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:06.954641   10554 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1129 08:29:07.453110   10554 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.334680035s)
	W1129 08:29:07.453181   10554 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1129 08:29:07.453207   10554 retry.go:31] will retry after 334.67341ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1129 08:29:07.453259   10554 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.168472576s)
	I1129 08:29:07.453486   10554 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.011223981s)
	I1129 08:29:07.453507   10554 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-053273"
	I1129 08:29:07.457984   10554 out.go:179] * Verifying csi-hostpath-driver addon...
	I1129 08:29:07.460427   10554 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1129 08:29:07.463441   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:07.464255   10554 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1129 08:29:07.464276   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:07.464541   10554 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1129 08:29:07.464563   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
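The three pollers above (registry, ingress-nginx, csi-hostpath-driver) each loop on a label selector until the matching pods leave Pending, which is what produces the long run of near-identical "waiting for pod" lines that follows. A manual equivalent for one of them, assuming the namespace and label shown in the log (illustrative only):

	kubectl -n kube-system wait --for=condition=Ready pod \
	  -l kubernetes.io/minikube-addons=csi-hostpath-driver --timeout=6m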
	I1129 08:29:07.788757   10554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1129 08:29:07.954138   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:07.954254   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1129 08:29:07.961425   10554 node_ready.go:57] node "addons-053273" has "Ready":"False" status (will retry)
	I1129 08:29:07.963289   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:08.454957   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:08.455145   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:08.463202   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:08.954136   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:08.954304   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:08.962764   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:09.454149   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:09.454353   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:09.462920   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:09.954027   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:09.954192   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:09.962718   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:10.256535   10554 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.467738069s)
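The first snapshot-controller apply failed because the VolumeSnapshotClass object was submitted in the same apply as the CRDs that define it, before those CRDs were established ("ensure CRDs are installed first"). The retry roughly 335ms later, re-run with --force, succeeds above once the API server has registered the new kinds. A more deterministic ordering, sketched with plain kubectl (file and CRD names are taken from the log; the sequencing itself is an illustration, not minikube's implementation):

	# Apply the CRD first, wait until it is established, then apply
	# the custom resources that depend on it.
	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml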
	I1129 08:29:10.454312   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:10.454521   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1129 08:29:10.461455   10554 node_ready.go:57] node "addons-053273" has "Ready":"False" status (will retry)
	I1129 08:29:10.462420   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:10.954610   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:10.954920   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:10.964632   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:11.455101   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:11.455240   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:11.462668   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:11.954282   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:11.954501   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:11.962440   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:12.454706   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:12.454789   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1129 08:29:12.461701   10554 node_ready.go:57] node "addons-053273" has "Ready":"False" status (will retry)
	I1129 08:29:12.462783   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:12.954962   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:12.955181   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:12.963078   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:13.119765   10554 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1129 08:29:13.119832   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:29:13.137589   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:29:13.244319   10554 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1129 08:29:13.257023   10554 addons.go:239] Setting addon gcp-auth=true in "addons-053273"
	I1129 08:29:13.257077   10554 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:29:13.257404   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:29:13.274724   10554 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1129 08:29:13.274769   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:29:13.292998   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
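Both the scp and gcp-auth steps reach the node over SSH by asking Docker which host port is mapped to the container's 22/tcp (32768 here) and authenticating with the profile's generated key. Reproducing the lookup by hand, using only values that appear in the log:

	PORT=$(docker container inspect -f \
	  '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-053273)
	ssh -p "$PORT" \
	  -i /home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa \
	  docker@127.0.0.1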
	I1129 08:29:13.392686   10554 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1129 08:29:13.393860   10554 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1129 08:29:13.394923   10554 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1129 08:29:13.394941   10554 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1129 08:29:13.408336   10554 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1129 08:29:13.408364   10554 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1129 08:29:13.420912   10554 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1129 08:29:13.420936   10554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1129 08:29:13.434434   10554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1129 08:29:13.454465   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:13.454540   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:13.462516   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:13.730905   10554 addons.go:495] Verifying addon gcp-auth=true in "addons-053273"
	I1129 08:29:13.732188   10554 out.go:179] * Verifying gcp-auth addon...
	I1129 08:29:13.734155   10554 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1129 08:29:13.736508   10554 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1129 08:29:13.736524   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:13.954210   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:13.954476   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:13.962462   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:14.237153   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:14.454075   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:14.454123   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1129 08:29:14.461952   10554 node_ready.go:57] node "addons-053273" has "Ready":"False" status (will retry)
	I1129 08:29:14.462896   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:14.737861   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:14.954569   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:14.954749   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:14.962760   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:15.237416   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:15.454011   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:15.454023   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:15.463073   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:15.736882   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:15.954644   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:15.954786   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:15.962704   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:16.237420   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:16.454585   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:16.454641   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:16.462657   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:16.737460   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:16.954081   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:16.954126   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1129 08:29:16.962416   10554 node_ready.go:57] node "addons-053273" has "Ready":"False" status (will retry)
	I1129 08:29:16.963233   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:17.237695   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:17.454487   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:17.454583   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:17.462565   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:17.737797   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:17.954529   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:17.954593   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:17.962426   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:18.237077   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:18.455212   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:18.455288   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:18.462646   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:18.737693   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:18.954367   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:18.954561   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:18.962386   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:19.237308   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:19.453719   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:19.453917   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1129 08:29:19.461694   10554 node_ready.go:57] node "addons-053273" has "Ready":"False" status (will retry)
	I1129 08:29:19.462733   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:19.737697   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:19.954516   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:19.954650   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:19.962462   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:20.237521   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:20.454569   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:20.454647   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:20.462834   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:20.737578   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:20.954496   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:20.954702   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:20.962586   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:21.237350   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:21.453937   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:21.454084   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1129 08:29:21.462011   10554 node_ready.go:57] node "addons-053273" has "Ready":"False" status (will retry)
	I1129 08:29:21.463010   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:21.737626   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:21.954485   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:21.954599   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:21.962493   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:22.237334   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:22.454351   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:22.454364   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:22.462544   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:22.737549   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:22.954399   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:22.954418   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:22.963196   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:23.236616   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:23.454305   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:23.454310   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:23.462970   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:23.737757   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:23.954582   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:23.954634   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1129 08:29:23.961651   10554 node_ready.go:57] node "addons-053273" has "Ready":"False" status (will retry)
	I1129 08:29:23.962767   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:24.237567   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:24.454535   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:24.454635   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:24.462787   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:24.737725   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:24.954481   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:24.954571   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:24.963587   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:25.236711   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:25.454445   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:25.454556   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:25.462601   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:25.737459   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:25.954397   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:25.954628   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:25.962729   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:26.237631   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:26.454856   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:26.454871   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1129 08:29:26.461667   10554 node_ready.go:57] node "addons-053273" has "Ready":"False" status (will retry)
	I1129 08:29:26.462620   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:26.737327   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:26.953873   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:26.953997   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:26.962827   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:27.237556   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:27.454353   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:27.454404   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:27.462598   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:27.737563   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:27.954194   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:27.954242   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:27.962571   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:28.237303   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:28.454006   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:28.454024   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1129 08:29:28.461946   10554 node_ready.go:57] node "addons-053273" has "Ready":"False" status (will retry)
	I1129 08:29:28.462984   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:28.737957   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:28.954874   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:28.954911   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:28.962754   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:29.237445   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:29.454026   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:29.454164   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:29.462289   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:29.737240   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:29.953679   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:29.953735   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:29.962621   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:30.237606   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:30.454615   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:30.454834   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:30.462686   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:30.737221   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:30.953914   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:30.954077   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1129 08:29:30.961857   10554 node_ready.go:57] node "addons-053273" has "Ready":"False" status (will retry)
	I1129 08:29:30.962788   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:31.237444   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:31.454277   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:31.454284   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:31.462326   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:31.736705   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:31.954375   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:31.954525   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:31.962466   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:32.237298   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:32.454036   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:32.454112   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:32.462931   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:32.737619   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:32.954355   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:32.954407   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:32.962452   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:33.237145   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:33.454821   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:33.454926   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1129 08:29:33.461901   10554 node_ready.go:57] node "addons-053273" has "Ready":"False" status (will retry)
	I1129 08:29:33.462835   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:33.737526   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:33.954282   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:33.954382   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:33.962736   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:34.237193   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:34.453975   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:34.454053   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:34.462930   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:34.737551   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:34.954339   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:34.954431   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:34.962291   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:35.237090   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:35.454635   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:35.454757   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:35.462827   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:35.737748   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:35.954435   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:35.954600   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1129 08:29:35.961422   10554 node_ready.go:57] node "addons-053273" has "Ready":"False" status (will retry)
	I1129 08:29:35.962458   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:36.237081   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:36.454478   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:36.454577   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:36.462761   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:36.737240   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:36.954058   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:36.954162   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:36.963208   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:37.237017   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:37.454539   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:37.454593   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:37.462338   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:37.736977   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:37.954874   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:37.954923   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1129 08:29:37.961792   10554 node_ready.go:57] node "addons-053273" has "Ready":"False" status (will retry)
	I1129 08:29:37.962756   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:38.237445   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:38.454204   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:38.454251   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:38.463039   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:38.736967   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:38.954343   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:38.954577   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:38.962512   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:39.237038   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:39.455029   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:39.455111   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:39.462790   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:39.737724   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:39.954374   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:39.954403   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:39.962987   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:40.237600   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:40.454356   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:40.454536   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1129 08:29:40.461669   10554 node_ready.go:57] node "addons-053273" has "Ready":"False" status (will retry)
	I1129 08:29:40.462560   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:40.737449   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:40.954375   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:40.954449   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:40.962241   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:41.237064   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:41.454965   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:41.454965   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:41.462691   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:41.737253   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:41.953853   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:41.953960   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:41.962936   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:42.237456   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:42.454258   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:42.454435   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:42.462320   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:42.736637   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:42.954415   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:42.954436   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1129 08:29:42.961598   10554 node_ready.go:57] node "addons-053273" has "Ready":"False" status (will retry)
	I1129 08:29:42.962478   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:43.237369   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:43.453868   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:43.454110   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:43.462867   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:43.737452   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:43.953959   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:43.954050   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:43.963005   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:44.236499   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:44.454188   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:44.454364   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:44.462600   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:44.737175   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:44.953965   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:44.954058   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1129 08:29:44.961955   10554 node_ready.go:57] node "addons-053273" has "Ready":"False" status (will retry)
	I1129 08:29:44.962930   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:45.237686   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:45.454415   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:45.454484   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:45.462280   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:45.737001   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:45.953792   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:45.953959   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:45.962810   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:46.237614   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:46.454249   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:46.454385   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:46.462961   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:46.737606   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:46.954265   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:46.954413   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:46.963159   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:47.237348   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:47.454176   10554 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1129 08:29:47.454202   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:47.454408   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:47.460860   10554 node_ready.go:49] node "addons-053273" is "Ready"
	I1129 08:29:47.460881   10554 node_ready.go:38] duration metric: took 41.501824884s for node "addons-053273" to be "Ready" ...
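
	The node_ready.go lines above are a plain poll loop: fetch the node, look for the Ready condition, sleep, repeat. A minimal client-go sketch of that wait follows; it is illustrative only (the kubeconfig path, the 2-second interval, and the hard-coded "addons-053273" name are assumptions taken from this log, not minikube's actual node_ready.go).

	```go
	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: minikube has already written credentials for this
		// profile to the default kubeconfig location.
		cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(os.Getenv("HOME"), ".kube", "config"))
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		for {
			node, err := client.CoreV1().Nodes().Get(ctx, "addons-053273", metav1.GetOptions{})
			if err == nil {
				for _, cond := range node.Status.Conditions {
					if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
						fmt.Println(`node "addons-053273" is "Ready"`)
						return
					}
				}
				fmt.Println(`node "addons-053273" has "Ready":"False" status (will retry)`)
			}
			select {
			case <-ctx.Done():
				fmt.Println("timed out waiting for node Ready")
				return
			case <-time.After(2 * time.Second): // roughly the cadence visible in the timestamps above
			}
		}
	}
	```
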
	I1129 08:29:47.460895   10554 api_server.go:52] waiting for apiserver process to appear ...
	I1129 08:29:47.460939   10554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 08:29:47.462595   10554 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1129 08:29:47.462614   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:47.477301   10554 api_server.go:72] duration metric: took 42.038517373s to wait for apiserver process to appear ...
	I1129 08:29:47.477329   10554 api_server.go:88] waiting for apiserver healthz status ...
	I1129 08:29:47.477350   10554 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1129 08:29:47.482430   10554 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1129 08:29:47.483413   10554 api_server.go:141] control plane version: v1.34.1
	I1129 08:29:47.483445   10554 api_server.go:131] duration metric: took 6.109655ms to wait for apiserver health ...
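
	The healthz probe logged above can be reproduced by hand: GET /healthz on the apiserver endpoint and expect a 200 with an "ok" body. A minimal sketch, assuming the endpoint from the log is reachable; InsecureSkipVerify stands in for minikube's real client-certificate setup and is only suitable for local experimentation.

	```go
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Skip cert verification for this throwaway probe only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			fmt.Println("healthz not reachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A healthy apiserver answers 200 with the literal body "ok",
		// matching the two log lines above.
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}
	```
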
	I1129 08:29:47.483458   10554 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 08:29:47.555729   10554 system_pods.go:59] 20 kube-system pods found
	I1129 08:29:47.555771   10554 system_pods.go:61] "amd-gpu-device-plugin-d5jts" [d49cc084-87de-4151-bd07-7d32d21a3754] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1129 08:29:47.555787   10554 system_pods.go:61] "coredns-66bc5c9577-kpln4" [0c815eba-8b6a-47d4-8b05-a715b3dcd17a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 08:29:47.555799   10554 system_pods.go:61] "csi-hostpath-attacher-0" [1f391ce2-abb9-4600-8971-d31b368252f6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1129 08:29:47.555808   10554 system_pods.go:61] "csi-hostpath-resizer-0" [945a92c3-b309-4830-a893-b5cd9c3ae0d7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1129 08:29:47.555817   10554 system_pods.go:61] "csi-hostpathplugin-rvvrd" [813a302e-fd2c-452f-be53-e9bdf6ee6f60] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1129 08:29:47.555853   10554 system_pods.go:61] "etcd-addons-053273" [86ef8f9c-32d8-4877-b010-6a88d85de53c] Running
	I1129 08:29:47.555868   10554 system_pods.go:61] "kindnet-xqwm5" [e96900d3-e678-4123-a26a-9924fdc05772] Running
	I1129 08:29:47.555874   10554 system_pods.go:61] "kube-apiserver-addons-053273" [f51f767c-fffd-4e64-b566-b1a6123060a9] Running
	I1129 08:29:47.555883   10554 system_pods.go:61] "kube-controller-manager-addons-053273" [4d4cffd1-c5e0-4542-842c-7db6cb701e0b] Running
	I1129 08:29:47.555892   10554 system_pods.go:61] "kube-ingress-dns-minikube" [2acc9ff6-35bf-4f93-b76a-4e02d6a36cf8] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1129 08:29:47.555901   10554 system_pods.go:61] "kube-proxy-2gkdk" [fd0daf23-4091-4668-9729-627e0356bc5b] Running
	I1129 08:29:47.555907   10554 system_pods.go:61] "kube-scheduler-addons-053273" [7aed405a-86cf-411c-a827-b79a6935d5f0] Running
	I1129 08:29:47.555917   10554 system_pods.go:61] "metrics-server-85b7d694d7-48dhj" [64d2fc70-f8ba-4c90-aae2-41bb30f04b8f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1129 08:29:47.555929   10554 system_pods.go:61] "nvidia-device-plugin-daemonset-52bjw" [111c52a4-32cd-4beb-a0ed-11bcc2e5bf21] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1129 08:29:47.555939   10554 system_pods.go:61] "registry-6b586f9694-gt598" [43f7eae2-b891-44d2-80dc-8650bee1c9d8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1129 08:29:47.555950   10554 system_pods.go:61] "registry-creds-764b6fb674-ktw8b" [e715e721-8143-4728-b353-67ec7cddd186] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1129 08:29:47.555967   10554 system_pods.go:61] "registry-proxy-zsxkb" [417acc75-98bb-45c2-a648-a0e941620c8e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1129 08:29:47.555979   10554 system_pods.go:61] "snapshot-controller-7d9fbc56b8-lrhxm" [9268b5e7-d8c2-49ed-8980-a63d87cecb6a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 08:29:47.555992   10554 system_pods.go:61] "snapshot-controller-7d9fbc56b8-q48mh" [a4b61b82-5f2f-41e3-98a2-7b12080d1a1f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 08:29:47.556004   10554 system_pods.go:61] "storage-provisioner" [42b3498b-6992-4f91-b7bd-bd29a41526d6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 08:29:47.556015   10554 system_pods.go:74] duration metric: took 72.550166ms to wait for pod list to return data ...
	I1129 08:29:47.556030   10554 default_sa.go:34] waiting for default service account to be created ...
	I1129 08:29:47.558468   10554 default_sa.go:45] found service account: "default"
	I1129 08:29:47.558502   10554 default_sa.go:55] duration metric: took 2.452748ms for default service account to be created ...
	I1129 08:29:47.558511   10554 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 08:29:47.562454   10554 system_pods.go:86] 20 kube-system pods found
	I1129 08:29:47.562494   10554 system_pods.go:89] "amd-gpu-device-plugin-d5jts" [d49cc084-87de-4151-bd07-7d32d21a3754] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1129 08:29:47.562505   10554 system_pods.go:89] "coredns-66bc5c9577-kpln4" [0c815eba-8b6a-47d4-8b05-a715b3dcd17a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 08:29:47.562516   10554 system_pods.go:89] "csi-hostpath-attacher-0" [1f391ce2-abb9-4600-8971-d31b368252f6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1129 08:29:47.562525   10554 system_pods.go:89] "csi-hostpath-resizer-0" [945a92c3-b309-4830-a893-b5cd9c3ae0d7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1129 08:29:47.562535   10554 system_pods.go:89] "csi-hostpathplugin-rvvrd" [813a302e-fd2c-452f-be53-e9bdf6ee6f60] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1129 08:29:47.562541   10554 system_pods.go:89] "etcd-addons-053273" [86ef8f9c-32d8-4877-b010-6a88d85de53c] Running
	I1129 08:29:47.562566   10554 system_pods.go:89] "kindnet-xqwm5" [e96900d3-e678-4123-a26a-9924fdc05772] Running
	I1129 08:29:47.562573   10554 system_pods.go:89] "kube-apiserver-addons-053273" [f51f767c-fffd-4e64-b566-b1a6123060a9] Running
	I1129 08:29:47.562579   10554 system_pods.go:89] "kube-controller-manager-addons-053273" [4d4cffd1-c5e0-4542-842c-7db6cb701e0b] Running
	I1129 08:29:47.562594   10554 system_pods.go:89] "kube-ingress-dns-minikube" [2acc9ff6-35bf-4f93-b76a-4e02d6a36cf8] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1129 08:29:47.562602   10554 system_pods.go:89] "kube-proxy-2gkdk" [fd0daf23-4091-4668-9729-627e0356bc5b] Running
	I1129 08:29:47.562608   10554 system_pods.go:89] "kube-scheduler-addons-053273" [7aed405a-86cf-411c-a827-b79a6935d5f0] Running
	I1129 08:29:47.562618   10554 system_pods.go:89] "metrics-server-85b7d694d7-48dhj" [64d2fc70-f8ba-4c90-aae2-41bb30f04b8f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1129 08:29:47.562630   10554 system_pods.go:89] "nvidia-device-plugin-daemonset-52bjw" [111c52a4-32cd-4beb-a0ed-11bcc2e5bf21] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1129 08:29:47.562639   10554 system_pods.go:89] "registry-6b586f9694-gt598" [43f7eae2-b891-44d2-80dc-8650bee1c9d8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1129 08:29:47.562648   10554 system_pods.go:89] "registry-creds-764b6fb674-ktw8b" [e715e721-8143-4728-b353-67ec7cddd186] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1129 08:29:47.562656   10554 system_pods.go:89] "registry-proxy-zsxkb" [417acc75-98bb-45c2-a648-a0e941620c8e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1129 08:29:47.562666   10554 system_pods.go:89] "snapshot-controller-7d9fbc56b8-lrhxm" [9268b5e7-d8c2-49ed-8980-a63d87cecb6a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 08:29:47.562674   10554 system_pods.go:89] "snapshot-controller-7d9fbc56b8-q48mh" [a4b61b82-5f2f-41e3-98a2-7b12080d1a1f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 08:29:47.562681   10554 system_pods.go:89] "storage-provisioner" [42b3498b-6992-4f91-b7bd-bd29a41526d6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 08:29:47.562699   10554 retry.go:31] will retry after 285.944425ms: missing components: kube-dns
	I1129 08:29:47.737706   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:47.853714   10554 system_pods.go:86] 20 kube-system pods found
	I1129 08:29:47.853749   10554 system_pods.go:89] "amd-gpu-device-plugin-d5jts" [d49cc084-87de-4151-bd07-7d32d21a3754] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1129 08:29:47.853761   10554 system_pods.go:89] "coredns-66bc5c9577-kpln4" [0c815eba-8b6a-47d4-8b05-a715b3dcd17a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 08:29:47.853772   10554 system_pods.go:89] "csi-hostpath-attacher-0" [1f391ce2-abb9-4600-8971-d31b368252f6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1129 08:29:47.853780   10554 system_pods.go:89] "csi-hostpath-resizer-0" [945a92c3-b309-4830-a893-b5cd9c3ae0d7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1129 08:29:47.853788   10554 system_pods.go:89] "csi-hostpathplugin-rvvrd" [813a302e-fd2c-452f-be53-e9bdf6ee6f60] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1129 08:29:47.853794   10554 system_pods.go:89] "etcd-addons-053273" [86ef8f9c-32d8-4877-b010-6a88d85de53c] Running
	I1129 08:29:47.853800   10554 system_pods.go:89] "kindnet-xqwm5" [e96900d3-e678-4123-a26a-9924fdc05772] Running
	I1129 08:29:47.853810   10554 system_pods.go:89] "kube-apiserver-addons-053273" [f51f767c-fffd-4e64-b566-b1a6123060a9] Running
	I1129 08:29:47.853815   10554 system_pods.go:89] "kube-controller-manager-addons-053273" [4d4cffd1-c5e0-4542-842c-7db6cb701e0b] Running
	I1129 08:29:47.853823   10554 system_pods.go:89] "kube-ingress-dns-minikube" [2acc9ff6-35bf-4f93-b76a-4e02d6a36cf8] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1129 08:29:47.853827   10554 system_pods.go:89] "kube-proxy-2gkdk" [fd0daf23-4091-4668-9729-627e0356bc5b] Running
	I1129 08:29:47.853831   10554 system_pods.go:89] "kube-scheduler-addons-053273" [7aed405a-86cf-411c-a827-b79a6935d5f0] Running
	I1129 08:29:47.853836   10554 system_pods.go:89] "metrics-server-85b7d694d7-48dhj" [64d2fc70-f8ba-4c90-aae2-41bb30f04b8f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1129 08:29:47.853869   10554 system_pods.go:89] "nvidia-device-plugin-daemonset-52bjw" [111c52a4-32cd-4beb-a0ed-11bcc2e5bf21] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1129 08:29:47.853883   10554 system_pods.go:89] "registry-6b586f9694-gt598" [43f7eae2-b891-44d2-80dc-8650bee1c9d8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1129 08:29:47.853891   10554 system_pods.go:89] "registry-creds-764b6fb674-ktw8b" [e715e721-8143-4728-b353-67ec7cddd186] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1129 08:29:47.853897   10554 system_pods.go:89] "registry-proxy-zsxkb" [417acc75-98bb-45c2-a648-a0e941620c8e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1129 08:29:47.853905   10554 system_pods.go:89] "snapshot-controller-7d9fbc56b8-lrhxm" [9268b5e7-d8c2-49ed-8980-a63d87cecb6a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 08:29:47.853918   10554 system_pods.go:89] "snapshot-controller-7d9fbc56b8-q48mh" [a4b61b82-5f2f-41e3-98a2-7b12080d1a1f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 08:29:47.853927   10554 system_pods.go:89] "storage-provisioner" [42b3498b-6992-4f91-b7bd-bd29a41526d6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 08:29:47.853946   10554 retry.go:31] will retry after 234.686233ms: missing components: kube-dns
	I1129 08:29:47.954512   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:47.954656   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:47.963323   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:48.093946   10554 system_pods.go:86] 20 kube-system pods found
	I1129 08:29:48.093977   10554 system_pods.go:89] "amd-gpu-device-plugin-d5jts" [d49cc084-87de-4151-bd07-7d32d21a3754] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1129 08:29:48.093987   10554 system_pods.go:89] "coredns-66bc5c9577-kpln4" [0c815eba-8b6a-47d4-8b05-a715b3dcd17a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 08:29:48.093998   10554 system_pods.go:89] "csi-hostpath-attacher-0" [1f391ce2-abb9-4600-8971-d31b368252f6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1129 08:29:48.094007   10554 system_pods.go:89] "csi-hostpath-resizer-0" [945a92c3-b309-4830-a893-b5cd9c3ae0d7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1129 08:29:48.094019   10554 system_pods.go:89] "csi-hostpathplugin-rvvrd" [813a302e-fd2c-452f-be53-e9bdf6ee6f60] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1129 08:29:48.094029   10554 system_pods.go:89] "etcd-addons-053273" [86ef8f9c-32d8-4877-b010-6a88d85de53c] Running
	I1129 08:29:48.094036   10554 system_pods.go:89] "kindnet-xqwm5" [e96900d3-e678-4123-a26a-9924fdc05772] Running
	I1129 08:29:48.094044   10554 system_pods.go:89] "kube-apiserver-addons-053273" [f51f767c-fffd-4e64-b566-b1a6123060a9] Running
	I1129 08:29:48.094049   10554 system_pods.go:89] "kube-controller-manager-addons-053273" [4d4cffd1-c5e0-4542-842c-7db6cb701e0b] Running
	I1129 08:29:48.094060   10554 system_pods.go:89] "kube-ingress-dns-minikube" [2acc9ff6-35bf-4f93-b76a-4e02d6a36cf8] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1129 08:29:48.094066   10554 system_pods.go:89] "kube-proxy-2gkdk" [fd0daf23-4091-4668-9729-627e0356bc5b] Running
	I1129 08:29:48.094078   10554 system_pods.go:89] "kube-scheduler-addons-053273" [7aed405a-86cf-411c-a827-b79a6935d5f0] Running
	I1129 08:29:48.094087   10554 system_pods.go:89] "metrics-server-85b7d694d7-48dhj" [64d2fc70-f8ba-4c90-aae2-41bb30f04b8f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1129 08:29:48.094101   10554 system_pods.go:89] "nvidia-device-plugin-daemonset-52bjw" [111c52a4-32cd-4beb-a0ed-11bcc2e5bf21] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1129 08:29:48.094110   10554 system_pods.go:89] "registry-6b586f9694-gt598" [43f7eae2-b891-44d2-80dc-8650bee1c9d8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1129 08:29:48.094127   10554 system_pods.go:89] "registry-creds-764b6fb674-ktw8b" [e715e721-8143-4728-b353-67ec7cddd186] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1129 08:29:48.094137   10554 system_pods.go:89] "registry-proxy-zsxkb" [417acc75-98bb-45c2-a648-a0e941620c8e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1129 08:29:48.094146   10554 system_pods.go:89] "snapshot-controller-7d9fbc56b8-lrhxm" [9268b5e7-d8c2-49ed-8980-a63d87cecb6a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 08:29:48.094157   10554 system_pods.go:89] "snapshot-controller-7d9fbc56b8-q48mh" [a4b61b82-5f2f-41e3-98a2-7b12080d1a1f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 08:29:48.094168   10554 system_pods.go:89] "storage-provisioner" [42b3498b-6992-4f91-b7bd-bd29a41526d6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 08:29:48.094186   10554 retry.go:31] will retry after 463.425795ms: missing components: kube-dns
	I1129 08:29:48.237055   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:48.455428   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:48.455634   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:48.466616   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:48.568145   10554 system_pods.go:86] 20 kube-system pods found
	I1129 08:29:48.568179   10554 system_pods.go:89] "amd-gpu-device-plugin-d5jts" [d49cc084-87de-4151-bd07-7d32d21a3754] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1129 08:29:48.568185   10554 system_pods.go:89] "coredns-66bc5c9577-kpln4" [0c815eba-8b6a-47d4-8b05-a715b3dcd17a] Running
	I1129 08:29:48.568194   10554 system_pods.go:89] "csi-hostpath-attacher-0" [1f391ce2-abb9-4600-8971-d31b368252f6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1129 08:29:48.568205   10554 system_pods.go:89] "csi-hostpath-resizer-0" [945a92c3-b309-4830-a893-b5cd9c3ae0d7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1129 08:29:48.568215   10554 system_pods.go:89] "csi-hostpathplugin-rvvrd" [813a302e-fd2c-452f-be53-e9bdf6ee6f60] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1129 08:29:48.568220   10554 system_pods.go:89] "etcd-addons-053273" [86ef8f9c-32d8-4877-b010-6a88d85de53c] Running
	I1129 08:29:48.568227   10554 system_pods.go:89] "kindnet-xqwm5" [e96900d3-e678-4123-a26a-9924fdc05772] Running
	I1129 08:29:48.568237   10554 system_pods.go:89] "kube-apiserver-addons-053273" [f51f767c-fffd-4e64-b566-b1a6123060a9] Running
	I1129 08:29:48.568243   10554 system_pods.go:89] "kube-controller-manager-addons-053273" [4d4cffd1-c5e0-4542-842c-7db6cb701e0b] Running
	I1129 08:29:48.568255   10554 system_pods.go:89] "kube-ingress-dns-minikube" [2acc9ff6-35bf-4f93-b76a-4e02d6a36cf8] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1129 08:29:48.568262   10554 system_pods.go:89] "kube-proxy-2gkdk" [fd0daf23-4091-4668-9729-627e0356bc5b] Running
	I1129 08:29:48.568270   10554 system_pods.go:89] "kube-scheduler-addons-053273" [7aed405a-86cf-411c-a827-b79a6935d5f0] Running
	I1129 08:29:48.568290   10554 system_pods.go:89] "metrics-server-85b7d694d7-48dhj" [64d2fc70-f8ba-4c90-aae2-41bb30f04b8f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1129 08:29:48.568304   10554 system_pods.go:89] "nvidia-device-plugin-daemonset-52bjw" [111c52a4-32cd-4beb-a0ed-11bcc2e5bf21] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1129 08:29:48.568317   10554 system_pods.go:89] "registry-6b586f9694-gt598" [43f7eae2-b891-44d2-80dc-8650bee1c9d8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1129 08:29:48.568326   10554 system_pods.go:89] "registry-creds-764b6fb674-ktw8b" [e715e721-8143-4728-b353-67ec7cddd186] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1129 08:29:48.568336   10554 system_pods.go:89] "registry-proxy-zsxkb" [417acc75-98bb-45c2-a648-a0e941620c8e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1129 08:29:48.568348   10554 system_pods.go:89] "snapshot-controller-7d9fbc56b8-lrhxm" [9268b5e7-d8c2-49ed-8980-a63d87cecb6a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 08:29:48.568360   10554 system_pods.go:89] "snapshot-controller-7d9fbc56b8-q48mh" [a4b61b82-5f2f-41e3-98a2-7b12080d1a1f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 08:29:48.568368   10554 system_pods.go:89] "storage-provisioner" [42b3498b-6992-4f91-b7bd-bd29a41526d6] Running
	I1129 08:29:48.568379   10554 system_pods.go:126] duration metric: took 1.009861098s to wait for k8s-apps to be running ...
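
	The retry.go lines in this phase ("will retry after 285.944425ms: missing components: kube-dns") show the wait re-listing kube-system pods with a short randomized delay until nothing required is missing. A hedged sketch of that pattern follows; waitAppsRunning and the stubbed listRunning helper are hypothetical stand-ins for the pod query, and the 200-500ms delay range merely approximates the intervals printed above.

	```go
	package main

	import (
		"context"
		"fmt"
		"math/rand"
		"time"
	)

	// waitAppsRunning re-checks until no required component is missing,
	// sleeping a short randomized interval between attempts.
	func waitAppsRunning(ctx context.Context, listRunning func() (map[string]bool, error), required []string) error {
		for {
			missing := append([]string(nil), required...)
			if running, err := listRunning(); err == nil {
				missing = missing[:0]
				for _, r := range required {
					if !running[r] {
						missing = append(missing, r)
					}
				}
				if len(missing) == 0 {
					return nil // every required component reports Running
				}
			}
			d := time.Duration(200+rand.Intn(300)) * time.Millisecond
			fmt.Printf("will retry after %v: missing components: %v\n", d, missing)
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(d):
			}
		}
	}

	func main() {
		attempts := 0
		// Stub standing in for the kube-system pod listing; kube-dns comes
		// up on the fourth poll, roughly as it did in the log above.
		listRunning := func() (map[string]bool, error) {
			attempts++
			return map[string]bool{"kube-dns": attempts >= 4}, nil
		}
		if err := waitAppsRunning(context.Background(), listRunning, []string{"kube-dns"}); err != nil {
			fmt.Println("wait failed:", err)
		}
	}
	```
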
	I1129 08:29:48.568391   10554 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 08:29:48.568439   10554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 08:29:48.585050   10554 system_svc.go:56] duration metric: took 16.648806ms WaitForService to wait for kubelet
	I1129 08:29:48.585082   10554 kubeadm.go:587] duration metric: took 43.146302041s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
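
	The kubelet check above relies only on systemctl's exit status: `is-active --quiet` prints nothing and exits 0 exactly when the unit is active. A small sketch mirroring the logged command (the argv, including the literal "service" token, is copied verbatim from the ssh_runner line above; minikube runs it through its SSH runner, whereas this runs it locally):

	```go
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// --quiet suppresses output; only the exit status matters here.
		err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
		if err != nil {
			fmt.Println("kubelet service is not active:", err)
			return
		}
		fmt.Println("kubelet service is active")
	}
	```
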
	I1129 08:29:48.585109   10554 node_conditions.go:102] verifying NodePressure condition ...
	I1129 08:29:48.587993   10554 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1129 08:29:48.588021   10554 node_conditions.go:123] node cpu capacity is 8
	I1129 08:29:48.588042   10554 node_conditions.go:105] duration metric: took 2.926567ms to run NodePressure ...
	I1129 08:29:48.588057   10554 start.go:242] waiting for startup goroutines ...
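
	The NodePressure verification reads the node's capacity and scans its conditions, which is where the two capacity lines above come from. A client-go sketch under the same kubeconfig assumption as the earlier node-Ready example (again illustrative, not minikube's actual node_conditions.go):

	```go
	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(os.Getenv("HOME"), ".kube", "config"))
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		node, err := client.CoreV1().Nodes().Get(context.Background(), "addons-053273", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Capacity fields behind the two log lines above.
		fmt.Printf("node storage ephemeral capacity is %s\n", node.Status.Capacity.StorageEphemeral())
		fmt.Printf("node cpu capacity is %s\n", node.Status.Capacity.Cpu())
		// NodePressure: fail if MemoryPressure or DiskPressure is True.
		for _, cond := range node.Status.Conditions {
			if (cond.Type == corev1.NodeMemoryPressure || cond.Type == corev1.NodeDiskPressure) &&
				cond.Status == corev1.ConditionTrue {
				fmt.Printf("node reports %s\n", cond.Type)
				os.Exit(1)
			}
		}
		fmt.Println("no node pressure detected")
	}
	```
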
	I1129 08:29:48.737757   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:48.958442   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:48.958763   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:48.964402   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:49.238821   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:49.455019   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:49.456203   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:49.466089   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:49.738999   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:49.955503   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:49.956237   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:49.964764   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:50.238025   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:50.455466   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:50.455591   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:50.463807   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:50.737491   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:50.955219   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:50.955453   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:50.963785   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:51.237986   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:51.454745   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:51.455212   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:51.463745   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:51.737982   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:51.955169   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:51.955216   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:51.964017   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:52.237733   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:52.455516   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:52.455545   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:52.463018   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:52.737662   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:52.954675   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:52.954717   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:52.962743   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:53.237554   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:53.455641   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:53.456304   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:53.465219   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:53.737746   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:53.955126   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:53.955206   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:53.963588   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:54.237952   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:54.455129   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:54.455376   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:54.463991   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:54.737661   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:54.955579   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:54.955685   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:54.964243   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:55.238474   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:55.454680   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:55.454801   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:55.464104   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:55.737239   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:55.954515   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:55.954616   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:55.963284   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:56.237428   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:56.454431   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:56.454503   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:56.464056   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:56.737941   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:56.955247   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:56.955271   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:56.963564   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:57.237883   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:57.454762   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:57.454791   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:57.463640   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:57.738984   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:57.954797   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:57.954989   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:57.963158   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:58.236674   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:58.455160   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:58.455317   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:58.464077   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:58.738418   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:58.954586   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:58.954710   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:58.963233   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:59.237299   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:59.454309   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:59.454406   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:59.464267   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:59.737625   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:59.954879   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:59.954908   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:59.963583   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:00.237622   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:00.454538   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:00.454789   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:00.462950   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:00.738140   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:00.954585   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:00.954610   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:00.962466   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:01.237294   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:01.454549   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:01.454676   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:01.463538   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:01.737248   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:01.954292   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:01.954607   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:01.963814   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:02.237835   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:02.455172   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:02.455221   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:02.463892   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:02.737625   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:02.954678   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:02.954720   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:02.963029   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:03.238237   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:03.454535   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:03.454609   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:03.464322   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:03.737149   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:03.954365   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:03.954465   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:03.964580   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:04.237060   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:04.454994   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:04.455055   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:04.463010   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:04.737907   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:04.954948   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:04.955003   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:04.963759   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:05.238025   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:05.455164   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:05.455194   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:05.463555   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:05.737413   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:05.954270   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:05.954395   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:05.963569   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:06.237372   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:06.454235   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:06.454272   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:06.463440   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:06.737134   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:06.953788   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:06.953928   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:06.963146   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:07.236811   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:07.454419   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:07.454451   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:07.463485   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:07.737828   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:07.954781   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:07.955671   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:07.963051   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:08.238148   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:08.454137   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:08.454355   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:08.463811   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:08.738188   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:08.955225   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:08.955276   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:08.963778   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:09.237445   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:09.454763   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:09.454977   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:09.463662   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:09.737462   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:09.954541   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:09.954616   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:09.964043   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:10.237989   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:10.454904   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:10.455026   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:10.463700   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:10.739524   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:10.954752   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:10.954820   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:10.963249   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:11.237380   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:11.454276   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:11.454383   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:11.463188   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:11.738961   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:11.956123   10554 kapi.go:107] duration metric: took 1m5.004912689s to wait for kubernetes.io/minikube-addons=registry ...
	I1129 08:30:11.956398   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:11.964269   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:12.239754   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:12.455470   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:12.464175   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:12.737462   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:12.953836   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:12.963118   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:13.236626   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:13.454918   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:13.463294   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:13.738189   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:13.955657   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:13.963291   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:14.236857   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:14.454990   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:14.463616   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:14.737125   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:14.955004   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:14.963628   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:15.237477   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:15.455042   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:15.464057   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:15.736957   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:15.955199   10554 kapi.go:107] duration metric: took 1m9.003989625s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1129 08:30:15.963735   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:16.238064   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:16.463526   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:16.737301   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:16.964699   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:17.237720   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:17.463995   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:17.737613   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:17.963964   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:18.237888   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:18.463994   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:18.742656   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:18.963917   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:19.238704   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:19.463882   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:19.737834   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:19.963664   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:20.237357   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:20.465087   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:20.739458   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:20.964343   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:21.236630   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:21.463954   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:21.738212   10554 kapi.go:107] duration metric: took 1m8.004054797s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1129 08:30:21.739626   10554 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-053273 cluster.
	I1129 08:30:21.741294   10554 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1129 08:30:21.742562   10554 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1129 08:30:21.964611   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:22.464344   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:22.963885   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:23.463615   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:23.964504   10554 kapi.go:107] duration metric: took 1m16.504077956s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1129 08:30:23.966165   10554 out.go:179] * Enabled addons: cloud-spanner, storage-provisioner, registry-creds, amd-gpu-device-plugin, inspektor-gadget, ingress-dns, metrics-server, yakd, default-storageclass, nvidia-device-plugin, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1129 08:30:23.967288   10554 addons.go:530] duration metric: took 1m18.528488133s for enable addons: enabled=[cloud-spanner storage-provisioner registry-creds amd-gpu-device-plugin inspektor-gadget ingress-dns metrics-server yakd default-storageclass nvidia-device-plugin volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
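
The repeated kapi.go:96 lines above are minikube's label-based wait loop: each addon is polled by listing pods that match a label selector and re-checking until every match reports Ready, at which point kapi.go:107 logs the duration metric. A minimal sketch of that pattern with client-go follows; the poll interval, timeout, selector, and namespace are illustrative assumptions, not minikube's actual values.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForLabeledPods polls until every pod matching selector is Ready,
    // mirroring the "waiting for pod ... current state: Pending" loop above.
    func waitForLabeledPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
    	start := time.Now()
    	err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    			if err != nil || len(pods.Items) == 0 {
    				return false, nil // keep polling through transient errors / no pods yet
    			}
    			for i := range pods.Items {
    				if !podReady(&pods.Items[i]) {
    					fmt.Printf("waiting for pod %q, current state: %s\n", selector, pods.Items[i].Status.Phase)
    					return false, nil
    				}
    			}
    			return true, nil
    		})
    	if err == nil {
    		fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
    	}
    	return err
    }

    func podReady(p *corev1.Pod) bool {
    	for _, c := range p.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	_ = waitForLabeledPods(context.Background(), cs, "kube-system",
    		"kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute)
    }
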
	I1129 08:30:23.967329   10554 start.go:247] waiting for cluster config update ...
	I1129 08:30:23.967345   10554 start.go:256] writing updated cluster config ...
	I1129 08:30:23.967590   10554 ssh_runner.go:195] Run: rm -f paused
	I1129 08:30:23.971658   10554 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 08:30:23.974687   10554 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kpln4" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 08:30:23.978642   10554 pod_ready.go:94] pod "coredns-66bc5c9577-kpln4" is "Ready"
	I1129 08:30:23.978663   10554 pod_ready.go:86] duration metric: took 3.953847ms for pod "coredns-66bc5c9577-kpln4" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 08:30:23.980464   10554 pod_ready.go:83] waiting for pod "etcd-addons-053273" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 08:30:23.984014   10554 pod_ready.go:94] pod "etcd-addons-053273" is "Ready"
	I1129 08:30:23.984038   10554 pod_ready.go:86] duration metric: took 3.552545ms for pod "etcd-addons-053273" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 08:30:23.985653   10554 pod_ready.go:83] waiting for pod "kube-apiserver-addons-053273" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 08:30:23.988881   10554 pod_ready.go:94] pod "kube-apiserver-addons-053273" is "Ready"
	I1129 08:30:23.988904   10554 pod_ready.go:86] duration metric: took 3.229995ms for pod "kube-apiserver-addons-053273" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 08:30:23.990537   10554 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-053273" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 08:30:24.375559   10554 pod_ready.go:94] pod "kube-controller-manager-addons-053273" is "Ready"
	I1129 08:30:24.375589   10554 pod_ready.go:86] duration metric: took 385.028985ms for pod "kube-controller-manager-addons-053273" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 08:30:24.575685   10554 pod_ready.go:83] waiting for pod "kube-proxy-2gkdk" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 08:30:24.975284   10554 pod_ready.go:94] pod "kube-proxy-2gkdk" is "Ready"
	I1129 08:30:24.975311   10554 pod_ready.go:86] duration metric: took 399.604211ms for pod "kube-proxy-2gkdk" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 08:30:25.175384   10554 pod_ready.go:83] waiting for pod "kube-scheduler-addons-053273" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 08:30:25.574942   10554 pod_ready.go:94] pod "kube-scheduler-addons-053273" is "Ready"
	I1129 08:30:25.574972   10554 pod_ready.go:86] duration metric: took 399.564843ms for pod "kube-scheduler-addons-053273" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 08:30:25.574988   10554 pod_ready.go:40] duration metric: took 1.603296646s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 08:30:25.619034   10554 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1129 08:30:25.621029   10554 out.go:179] * Done! kubectl is now configured to use "addons-053273" cluster and "default" namespace by default
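
The start.go:625 line above is a client/server version-skew check: kubectl 1.34.2 against cluster 1.34.1 gives a minor skew of 0, so no warning is printed. A toy sketch of that comparison (version strings hardcoded for illustration; minikube's real check also handles pre-release suffixes):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minorSkew returns the absolute difference between the minor components
    // of two "major.minor.patch" version strings, e.g. 1.34.2 vs 1.34.1 -> 0.
    func minorSkew(client, server string) int {
    	minor := func(v string) int {
    		m, _ := strconv.Atoi(strings.Split(v, ".")[1]) // toy: ignores malformed input
    		return m
    	}
    	d := minor(client) - minor(server)
    	if d < 0 {
    		d = -d
    	}
    	return d
    }

    func main() {
    	fmt.Printf("kubectl: 1.34.2, cluster: 1.34.1 (minor skew: %d)\n", minorSkew("1.34.2", "1.34.1"))
    }
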
	
	
	==> CRI-O <==
	Nov 29 08:33:00 addons-053273 crio[769]: time="2025-11-29T08:33:00.033831083Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-bdwzf/POD" id=716d6147-fc2b-48a8-8137-e0fd98527f30 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 08:33:00 addons-053273 crio[769]: time="2025-11-29T08:33:00.033945665Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 08:33:00 addons-053273 crio[769]: time="2025-11-29T08:33:00.041784823Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-bdwzf Namespace:default ID:9900b19cd090049a380fcbfa686fcfbed9384582b113622dff28d50c70ac6132 UID:c1f74a2c-e756-4f98-ac95-0747f7c6987d NetNS:/var/run/netns/4374961a-516d-4835-abf3-418f5bc38dc9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00054a888}] Aliases:map[]}"
	Nov 29 08:33:00 addons-053273 crio[769]: time="2025-11-29T08:33:00.041828206Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-bdwzf to CNI network \"kindnet\" (type=ptp)"
	Nov 29 08:33:00 addons-053273 crio[769]: time="2025-11-29T08:33:00.052673681Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-bdwzf Namespace:default ID:9900b19cd090049a380fcbfa686fcfbed9384582b113622dff28d50c70ac6132 UID:c1f74a2c-e756-4f98-ac95-0747f7c6987d NetNS:/var/run/netns/4374961a-516d-4835-abf3-418f5bc38dc9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00054a888}] Aliases:map[]}"
	Nov 29 08:33:00 addons-053273 crio[769]: time="2025-11-29T08:33:00.052879086Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-bdwzf for CNI network kindnet (type=ptp)"
	Nov 29 08:33:00 addons-053273 crio[769]: time="2025-11-29T08:33:00.054191723Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 29 08:33:00 addons-053273 crio[769]: time="2025-11-29T08:33:00.05543927Z" level=info msg="Ran pod sandbox 9900b19cd090049a380fcbfa686fcfbed9384582b113622dff28d50c70ac6132 with infra container: default/hello-world-app-5d498dc89-bdwzf/POD" id=716d6147-fc2b-48a8-8137-e0fd98527f30 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 08:33:00 addons-053273 crio[769]: time="2025-11-29T08:33:00.056897563Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=434788db-4076-494e-8c46-b4f41ae38aa1 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 08:33:00 addons-053273 crio[769]: time="2025-11-29T08:33:00.057021186Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=434788db-4076-494e-8c46-b4f41ae38aa1 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 08:33:00 addons-053273 crio[769]: time="2025-11-29T08:33:00.057056408Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=434788db-4076-494e-8c46-b4f41ae38aa1 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 08:33:00 addons-053273 crio[769]: time="2025-11-29T08:33:00.057669499Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=22a37398-edfd-44db-9f2d-6df8e1567521 name=/runtime.v1.ImageService/PullImage
	Nov 29 08:33:00 addons-053273 crio[769]: time="2025-11-29T08:33:00.07325638Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Nov 29 08:33:00 addons-053273 crio[769]: time="2025-11-29T08:33:00.441805946Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" id=22a37398-edfd-44db-9f2d-6df8e1567521 name=/runtime.v1.ImageService/PullImage
	Nov 29 08:33:00 addons-053273 crio[769]: time="2025-11-29T08:33:00.44243933Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=5bb06122-90c3-4905-88a0-ac37f6ddf4d4 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 08:33:00 addons-053273 crio[769]: time="2025-11-29T08:33:00.4438648Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=61fbfe61-7f8a-403c-9747-a90903e01a07 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 08:33:00 addons-053273 crio[769]: time="2025-11-29T08:33:00.447492776Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-bdwzf/hello-world-app" id=c119e549-a6fc-4409-8b71-631eeda81219 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 08:33:00 addons-053273 crio[769]: time="2025-11-29T08:33:00.447619584Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 08:33:00 addons-053273 crio[769]: time="2025-11-29T08:33:00.453556608Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 08:33:00 addons-053273 crio[769]: time="2025-11-29T08:33:00.453763385Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/c97487866d2d1fdc2f23c71cb92abbc3606a108a91ae4c39727874cdcd2ce0f0/merged/etc/passwd: no such file or directory"
	Nov 29 08:33:00 addons-053273 crio[769]: time="2025-11-29T08:33:00.453794687Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c97487866d2d1fdc2f23c71cb92abbc3606a108a91ae4c39727874cdcd2ce0f0/merged/etc/group: no such file or directory"
	Nov 29 08:33:00 addons-053273 crio[769]: time="2025-11-29T08:33:00.454067888Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 08:33:00 addons-053273 crio[769]: time="2025-11-29T08:33:00.492663926Z" level=info msg="Created container 49ed6b968f8be765e8b5bcf9bfcf63f27ce7fdaa23031c58e1d92a19d8892a8c: default/hello-world-app-5d498dc89-bdwzf/hello-world-app" id=c119e549-a6fc-4409-8b71-631eeda81219 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 08:33:00 addons-053273 crio[769]: time="2025-11-29T08:33:00.49344289Z" level=info msg="Starting container: 49ed6b968f8be765e8b5bcf9bfcf63f27ce7fdaa23031c58e1d92a19d8892a8c" id=a01ab69f-601e-477e-8074-39bfff04c837 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 08:33:00 addons-053273 crio[769]: time="2025-11-29T08:33:00.495827451Z" level=info msg="Started container" PID=9544 containerID=49ed6b968f8be765e8b5bcf9bfcf63f27ce7fdaa23031c58e1d92a19d8892a8c description=default/hello-world-app-5d498dc89-bdwzf/hello-world-app id=a01ab69f-601e-477e-8074-39bfff04c837 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9900b19cd090049a380fcbfa686fcfbed9384582b113622dff28d50c70ac6132
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	49ed6b968f8be       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                                        Less than a second ago   Running             hello-world-app                          0                   9900b19cd0900       hello-world-app-5d498dc89-bdwzf            default
	8026c36cf38b7       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             2 minutes ago            Running             registry-creds                           0                   219c100b310e0       registry-creds-764b6fb674-ktw8b            kube-system
	ab345d2173649       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                                              2 minutes ago            Running             nginx                                    0                   f7ecb3bf99b7c       nginx                                      default
	13245012d20a5       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago            Running             busybox                                  0                   fe54bde1470e4       busybox                                    default
	36eaff53da4c2       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago            Running             csi-snapshotter                          0                   ffc88537c4dd3       csi-hostpathplugin-rvvrd                   kube-system
	ad9d7cb785c36       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago            Running             csi-provisioner                          0                   ffc88537c4dd3       csi-hostpathplugin-rvvrd                   kube-system
	ab372da9d790c       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago            Running             gcp-auth                                 0                   9c6ee2cbafb88       gcp-auth-78565c9fb4-msfdv                  gcp-auth
	d81ef79dbc162       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago            Running             liveness-probe                           0                   ffc88537c4dd3       csi-hostpathplugin-rvvrd                   kube-system
	847e89dd7511c       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago            Running             hostpath                                 0                   ffc88537c4dd3       csi-hostpathplugin-rvvrd                   kube-system
	d11c4362f35fc       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            2 minutes ago            Running             gadget                                   0                   939d498dc4ebe       gadget-bcrxg                               gadget
	f6278bb7605d8       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                2 minutes ago            Running             node-driver-registrar                    0                   ffc88537c4dd3       csi-hostpathplugin-rvvrd                   kube-system
	6c65eafee7031       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             2 minutes ago            Running             controller                               0                   c14c3fa0cf38b       ingress-nginx-controller-6c8bf45fb-49927   ingress-nginx
	0d8ff87c1bbda       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              2 minutes ago            Running             registry-proxy                           0                   17d0d15b00d60       registry-proxy-zsxkb                       kube-system
	4f7efb63753a4       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     2 minutes ago            Running             nvidia-device-plugin-ctr                 0                   f72dc54d2bd40       nvidia-device-plugin-daemonset-52bjw       kube-system
	5858e5f4e311e       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   2 minutes ago            Running             csi-external-health-monitor-controller   0                   ffc88537c4dd3       csi-hostpathplugin-rvvrd                   kube-system
	73fddce698a89       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     2 minutes ago            Running             amd-gpu-device-plugin                    0                   372be2070663b       amd-gpu-device-plugin-d5jts                kube-system
	e00a3c9e09fd2       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      2 minutes ago            Running             volume-snapshot-controller               0                   a19ef62f6d68c       snapshot-controller-7d9fbc56b8-lrhxm       kube-system
	25051b979640f       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      2 minutes ago            Running             volume-snapshot-controller               0                   f5106142b8759       snapshot-controller-7d9fbc56b8-q48mh       kube-system
	89340d645aa11       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              2 minutes ago            Running             csi-resizer                              0                   366a3e72ccee3       csi-hostpath-resizer-0                     kube-system
	5a1b833eaac88       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   2 minutes ago            Exited              patch                                    0                   3ac42e5437d83       ingress-nginx-admission-patch-hhlkx        ingress-nginx
	bf1f819dfa2fc       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   2 minutes ago            Exited              create                                   0                   8ce18433a8ddd       ingress-nginx-admission-create-mnxkr       ingress-nginx
	b181b0edd3181       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               3 minutes ago            Running             cloud-spanner-emulator                   0                   c06a8c8cff9e8       cloud-spanner-emulator-5bdddb765-4krxw     default
	5974b84e4fa77       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago            Running             csi-attacher                             0                   3966d25f69b01       csi-hostpath-attacher-0                    kube-system
	e84f185b68fcf       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago            Running             yakd                                     0                   31cba181447cd       yakd-dashboard-5ff678cb9-bxgzw             yakd-dashboard
	a39e2096a5720       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago            Running             local-path-provisioner                   0                   3233fa902512f       local-path-provisioner-648f6765c9-9nfgr    local-path-storage
	cccc47c19980c       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago            Running             registry                                 0                   0b08f455bffca       registry-6b586f9694-gt598                  kube-system
	9dd9f49e78582       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago            Running             minikube-ingress-dns                     0                   767d0f7a26bc3       kube-ingress-dns-minikube                  kube-system
	ab73d32295897       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago            Running             metrics-server                           0                   562083ab5cb26       metrics-server-85b7d694d7-48dhj            kube-system
	5415e4a1867a4       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago            Running             coredns                                  0                   dddb7aeaa7f34       coredns-66bc5c9577-kpln4                   kube-system
	f2891b2f589bc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago            Running             storage-provisioner                      0                   2a3c640c2c0fd       storage-provisioner                        kube-system
	536ae01d9c834       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             3 minutes ago            Running             kube-proxy                               0                   382f06683ff51       kube-proxy-2gkdk                           kube-system
	e7636733a471a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             3 minutes ago            Running             kindnet-cni                              0                   aa0650db1b9c7       kindnet-xqwm5                              kube-system
	e64fa5518306f       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             4 minutes ago            Running             kube-apiserver                           0                   b51e06b2e8480       kube-apiserver-addons-053273               kube-system
	42985a54cfd5e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             4 minutes ago            Running             kube-scheduler                           0                   5186d76f5e9a8       kube-scheduler-addons-053273               kube-system
	97e8e47987d6c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             4 minutes ago            Running             etcd                                     0                   cb112577a4bda       etcd-addons-053273                         kube-system
	fa034ef6fed4e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             4 minutes ago            Running             kube-controller-manager                  0                   08a1beb60974a       kube-controller-manager-addons-053273      kube-system
	
	
	==> coredns [5415e4a1867a4762dcd4722642096717b802f122cd9dcf2f049fe3c310535a6b] <==
	[INFO] 10.244.0.22:32875 - 29187 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000181054s
	[INFO] 10.244.0.22:48355 - 63318 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.004851052s
	[INFO] 10.244.0.22:55104 - 15886 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.00670072s
	[INFO] 10.244.0.22:41019 - 52352 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004179153s
	[INFO] 10.244.0.22:48708 - 17314 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004324775s
	[INFO] 10.244.0.22:38782 - 636 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003679518s
	[INFO] 10.244.0.22:51141 - 39325 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.007276491s
	[INFO] 10.244.0.22:46590 - 39527 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.002272497s
	[INFO] 10.244.0.22:41575 - 28120 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002160821s
	[INFO] 10.244.0.27:53912 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000232196s
	[INFO] 10.244.0.27:53046 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000158747s
	[INFO] 10.244.0.29:35786 - 42414 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000201695s
	[INFO] 10.244.0.29:37783 - 4714 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000162149s
	[INFO] 10.244.0.29:45153 - 43410 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000115862s
	[INFO] 10.244.0.29:55358 - 50067 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.00018894s
	[INFO] 10.244.0.29:58372 - 5632 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000115492s
	[INFO] 10.244.0.29:50413 - 46920 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000154581s
	[INFO] 10.244.0.29:37364 - 57243 "AAAA IN accounts.google.com.us-east4-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.007787592s
	[INFO] 10.244.0.29:33094 - 18018 "A IN accounts.google.com.us-east4-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.00854761s
	[INFO] 10.244.0.29:45026 - 4313 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004558491s
	[INFO] 10.244.0.29:47948 - 58856 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005350024s
	[INFO] 10.244.0.29:53559 - 27932 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004821279s
	[INFO] 10.244.0.29:45754 - 43675 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.006182438s
	[INFO] 10.244.0.29:57737 - 15126 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001585229s
	[INFO] 10.244.0.29:52217 - 15057 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001654825s
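
The NXDOMAIN-then-NOERROR ladders above are the pod's stub resolver walking its search path: with ndots:5, a relative name such as accounts.google.com is tried against every search domain (cluster suffixes first, then the GCE-inherited ones) before the bare name finally resolves. Reconstructed from the queried suffixes, the kube-system pod's resolv.conf would look roughly like the following; the nameserver IP is an assumption, the search domains are taken from the log lines above:

    nameserver 10.96.0.10
    search kube-system.svc.cluster.local svc.cluster.local cluster.local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
    options ndots:5
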
	
	
	==> describe nodes <==
	Name:               addons-053273
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-053273
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=addons-053273
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T08_29_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-053273
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-053273"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 08:28:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-053273
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 08:32:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 08:31:32 +0000   Sat, 29 Nov 2025 08:28:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 08:31:32 +0000   Sat, 29 Nov 2025 08:28:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 08:31:32 +0000   Sat, 29 Nov 2025 08:28:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 08:31:32 +0000   Sat, 29 Nov 2025 08:29:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-053273
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                1ccd1d4c-726c-4b43-bb60-99ea539b61bc
	  Boot ID:                    5b66e152-4b71-4129-a911-cefbf4861c86
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  default                     cloud-spanner-emulator-5bdddb765-4krxw      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  default                     hello-world-app-5d498dc89-bdwzf             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  gadget                      gadget-bcrxg                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  gcp-auth                    gcp-auth-78565c9fb4-msfdv                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-49927    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         3m55s
	  kube-system                 amd-gpu-device-plugin-d5jts                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	  kube-system                 coredns-66bc5c9577-kpln4                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m56s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 csi-hostpathplugin-rvvrd                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	  kube-system                 etcd-addons-053273                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m2s
	  kube-system                 kindnet-xqwm5                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m57s
	  kube-system                 kube-apiserver-addons-053273                250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-controller-manager-addons-053273       200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 kube-proxy-2gkdk                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 kube-scheduler-addons-053273                100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 metrics-server-85b7d694d7-48dhj             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         3m55s
	  kube-system                 nvidia-device-plugin-daemonset-52bjw        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	  kube-system                 registry-6b586f9694-gt598                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 registry-creds-764b6fb674-ktw8b             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 registry-proxy-zsxkb                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	  kube-system                 snapshot-controller-7d9fbc56b8-lrhxm        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 snapshot-controller-7d9fbc56b8-q48mh        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  local-path-storage          local-path-provisioner-648f6765c9-9nfgr     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-bxgzw              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     3m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m54s  kube-proxy       
	  Normal  Starting                 4m2s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m2s   kubelet          Node addons-053273 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m2s   kubelet          Node addons-053273 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m2s   kubelet          Node addons-053273 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m57s  node-controller  Node addons-053273 event: Registered Node addons-053273 in Controller
	  Normal  NodeReady                3m14s  kubelet          Node addons-053273 status is now: NodeReady
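
Everything in the describe-nodes block is served from the Node object's status: the conditions table, capacity and allocatable, and the per-pod requests that kubectl sums up. A small client-go sketch that reads the same conditions and allocatable resources directly (kubeconfig path and node name taken from this report; the rest is a sketch, not the describe implementation):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-053273", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// The same data "kubectl describe node" renders as Conditions / Allocatable.
    	for _, c := range node.Status.Conditions {
    		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
    	}
    	for res, qty := range node.Status.Allocatable {
    		fmt.Printf("allocatable %s = %s\n", res, qty.String())
    	}
    }
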
	
	
	==> dmesg <==
	[  +0.088968] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025527] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.969002] kauditd_printk_skb: 47 callbacks suppressed
	[Nov29 08:30] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	[  +1.030577] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	[  +1.023853] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	[  +1.023871] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	[  +1.023925] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	[  +2.047756] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	[  +4.031543] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	[Nov29 08:31] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	[ +16.382281] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	[ +32.252561] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
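
The recurring dmesg entries are "martian" packets: frames with source 127.0.0.1 arriving on eth0, which the kernel logs whenever log_martians is enabled. This is a common, harmless side effect of hairpin/NAT traffic inside nested container networks like this one. A small Go sketch that reads the two sysctls governing this behavior; the paths are the standard procfs locations:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // rp_filter decides whether such packets are dropped;
    // log_martians decides whether the kernel logs them, as seen above.
    func main() {
    	for _, k := range []string{"rp_filter", "log_martians"} {
    		b, err := os.ReadFile("/proc/sys/net/ipv4/conf/all/" + k)
    		if err != nil {
    			fmt.Println(k, "unreadable:", err)
    			continue
    		}
    		fmt.Printf("net.ipv4.conf.all.%s = %s\n", k, strings.TrimSpace(string(b)))
    	}
    }
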
	
	
	==> etcd [97e8e47987d6cc872f7c55ace3420ac14fe7648d2b6b68805f7ec77909d9de31] <==
	{"level":"warn","ts":"2025-11-29T08:28:56.625943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:28:56.632403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:28:56.638651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:28:56.650976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:28:56.657498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:28:56.663836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:28:56.669548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:28:56.676257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:28:56.682174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:28:56.688134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:28:56.694137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:28:56.699830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:28:56.705938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:28:56.712661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:28:56.718385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:28:56.736113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:28:56.742801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:28:56.749666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:28:56.797436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:29:08.012975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:29:08.019874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:29:34.188720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:29:34.194933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:29:34.215480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:29:34.222003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34732","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [ab372da9d790cd8b860ba694fee97376b9817e969256c677c3f3d3d70d64cabb] <==
	2025/11/29 08:30:21 GCP Auth Webhook started!
	2025/11/29 08:30:25 Ready to marshal response ...
	2025/11/29 08:30:25 Ready to write response ...
	2025/11/29 08:30:26 Ready to marshal response ...
	2025/11/29 08:30:26 Ready to write response ...
	2025/11/29 08:30:26 Ready to marshal response ...
	2025/11/29 08:30:26 Ready to write response ...
	2025/11/29 08:30:36 Ready to marshal response ...
	2025/11/29 08:30:36 Ready to write response ...
	2025/11/29 08:30:41 Ready to marshal response ...
	2025/11/29 08:30:41 Ready to write response ...
	2025/11/29 08:30:41 Ready to marshal response ...
	2025/11/29 08:30:41 Ready to write response ...
	2025/11/29 08:30:45 Ready to marshal response ...
	2025/11/29 08:30:45 Ready to write response ...
	2025/11/29 08:30:48 Ready to marshal response ...
	2025/11/29 08:30:48 Ready to write response ...
	2025/11/29 08:31:07 Ready to marshal response ...
	2025/11/29 08:31:07 Ready to write response ...
	2025/11/29 08:31:23 Ready to marshal response ...
	2025/11/29 08:31:23 Ready to write response ...
	2025/11/29 08:32:59 Ready to marshal response ...
	2025/11/29 08:32:59 Ready to write response ...
	
	
	==> kernel <==
	 08:33:01 up 15 min,  0 user,  load average: 0.28, 0.62, 0.32
	Linux addons-053273 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e7636733a471a3a3f946995b88ecad291811005e4acb271d6a45893ae7c423d8] <==
	I1129 08:30:56.885170       1 main.go:301] handling current node
	I1129 08:31:06.885968       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 08:31:06.886007       1 main.go:301] handling current node
	I1129 08:31:16.885226       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 08:31:16.885256       1 main.go:301] handling current node
	I1129 08:31:26.882811       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 08:31:26.882852       1 main.go:301] handling current node
	I1129 08:31:36.882349       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 08:31:36.882393       1 main.go:301] handling current node
	I1129 08:31:46.884947       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 08:31:46.884974       1 main.go:301] handling current node
	I1129 08:31:56.883334       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 08:31:56.883416       1 main.go:301] handling current node
	I1129 08:32:06.889982       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 08:32:06.890017       1 main.go:301] handling current node
	I1129 08:32:16.889393       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 08:32:16.889424       1 main.go:301] handling current node
	I1129 08:32:26.883602       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 08:32:26.883645       1 main.go:301] handling current node
	I1129 08:32:36.890690       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 08:32:36.890728       1 main.go:301] handling current node
	I1129 08:32:46.882132       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 08:32:46.882182       1 main.go:301] handling current node
	I1129 08:32:56.885727       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 08:32:56.885760       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e64fa5518306f9c5652b32ea5acb6e82c7ee8c3579473ec40514f095b17e4677] <==
	E1129 08:29:50.472246       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.45.99:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.45.99:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.45.99:443: connect: connection refused" logger="UnhandledError"
	E1129 08:29:50.477700       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.45.99:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.45.99:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.45.99:443: connect: connection refused" logger="UnhandledError"
	E1129 08:29:50.498880       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.45.99:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.45.99:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.45.99:443: connect: connection refused" logger="UnhandledError"
	W1129 08:29:51.473944       1 handler_proxy.go:99] no RequestInfo found in the context
	W1129 08:29:51.473963       1 handler_proxy.go:99] no RequestInfo found in the context
	E1129 08:29:51.473989       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1129 08:29:51.474004       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1129 08:29:51.474041       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1129 08:29:51.475155       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1129 08:29:55.544866       1 handler_proxy.go:99] no RequestInfo found in the context
	E1129 08:29:55.544941       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1129 08:29:55.544960       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.45.99:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.45.99:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	I1129 08:29:55.555322       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1129 08:30:35.322021       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:55098: use of closed network connection
	E1129 08:30:35.466726       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:55118: use of closed network connection
	I1129 08:30:35.974320       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1129 08:30:36.184531       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.246.158"}
	I1129 08:31:17.749500       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1129 08:32:59.807645       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.96.14"}
	
	
	==> kube-controller-manager [fa034ef6fed4ecde89cfb42c0d21d8c3f6bf1bb0f3863938a47c874ee0d12dc6] <==
	I1129 08:29:04.175415       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1129 08:29:04.175442       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1129 08:29:04.175464       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1129 08:29:04.175528       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1129 08:29:04.175538       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1129 08:29:04.175559       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1129 08:29:04.176467       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1129 08:29:04.177943       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1129 08:29:04.178010       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1129 08:29:04.178055       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1129 08:29:04.178062       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1129 08:29:04.178067       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1129 08:29:04.179110       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 08:29:04.180288       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1129 08:29:04.183987       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-053273" podCIDRs=["10.244.0.0/24"]
	I1129 08:29:04.196514       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1129 08:29:06.790328       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1129 08:29:34.183531       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1129 08:29:34.183647       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1129 08:29:34.183695       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1129 08:29:34.205679       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1129 08:29:34.209271       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1129 08:29:34.284542       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 08:29:34.309827       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 08:29:49.129719       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [536ae01d9c834fda23c10b1cd958984657b851b56d40240995ae742e32fa0686] <==
	I1129 08:29:06.675750       1 server_linux.go:53] "Using iptables proxy"
	I1129 08:29:06.763457       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 08:29:06.864738       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 08:29:06.864772       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1129 08:29:06.864872       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 08:29:06.895686       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 08:29:06.895743       1 server_linux.go:132] "Using iptables Proxier"
	I1129 08:29:06.902470       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 08:29:06.907786       1 server.go:527] "Version info" version="v1.34.1"
	I1129 08:29:06.907827       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 08:29:06.909339       1 config.go:106] "Starting endpoint slice config controller"
	I1129 08:29:06.909369       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 08:29:06.909404       1 config.go:200] "Starting service config controller"
	I1129 08:29:06.909411       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 08:29:06.909428       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 08:29:06.909434       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 08:29:06.909483       1 config.go:309] "Starting node config controller"
	I1129 08:29:06.909498       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 08:29:07.009767       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 08:29:07.009778       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 08:29:07.009798       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 08:29:07.009811       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [42985a54cfd5e3183cf34b9ec4e83a5bb0dacfbaf9b293c28fdbc26c9e683b7b] <==
	E1129 08:28:57.196714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1129 08:28:57.197714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 08:28:57.197826       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1129 08:28:57.197859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1129 08:28:57.197935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1129 08:28:57.197939       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 08:28:57.197981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 08:28:57.198017       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1129 08:28:57.198072       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 08:28:57.198088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1129 08:28:57.198254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1129 08:28:57.198290       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1129 08:28:57.198522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1129 08:28:58.011712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 08:28:58.021640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1129 08:28:58.068996       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 08:28:58.070860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1129 08:28:58.080864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1129 08:28:58.091802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 08:28:58.113231       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1129 08:28:58.258776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1129 08:28:58.295912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1129 08:28:58.418323       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1129 08:28:58.418489       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1129 08:29:01.194108       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 08:31:24 addons-053273 kubelet[1261]: I1129 08:31:24.904788    1261 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/task-pv-pod-restore" podStartSLOduration=1.6840490510000001 podStartE2EDuration="1.904769549s" podCreationTimestamp="2025-11-29 08:31:23 +0000 UTC" firstStartedPulling="2025-11-29 08:31:23.652531753 +0000 UTC m=+144.420320648" lastFinishedPulling="2025-11-29 08:31:23.873252238 +0000 UTC m=+144.641041146" observedRunningTime="2025-11-29 08:31:24.903435656 +0000 UTC m=+145.671224575" watchObservedRunningTime="2025-11-29 08:31:24.904769549 +0000 UTC m=+145.672558465"
	Nov 29 08:31:26 addons-053273 kubelet[1261]: I1129 08:31:26.312269    1261 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-zsxkb" secret="" err="secret \"gcp-auth\" not found"
	Nov 29 08:31:30 addons-053273 kubelet[1261]: I1129 08:31:30.596587    1261 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/33bd8557-b04d-4539-a521-f9a80045390f-gcp-creds\") pod \"33bd8557-b04d-4539-a521-f9a80045390f\" (UID: \"33bd8557-b04d-4539-a521-f9a80045390f\") "
	Nov 29 08:31:30 addons-053273 kubelet[1261]: I1129 08:31:30.596672    1261 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33bd8557-b04d-4539-a521-f9a80045390f-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "33bd8557-b04d-4539-a521-f9a80045390f" (UID: "33bd8557-b04d-4539-a521-f9a80045390f"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 29 08:31:30 addons-053273 kubelet[1261]: I1129 08:31:30.596713    1261 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^c96570b5-ccfd-11f0-830b-52439bdd4d06\") pod \"33bd8557-b04d-4539-a521-f9a80045390f\" (UID: \"33bd8557-b04d-4539-a521-f9a80045390f\") "
	Nov 29 08:31:30 addons-053273 kubelet[1261]: I1129 08:31:30.596748    1261 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65m8x\" (UniqueName: \"kubernetes.io/projected/33bd8557-b04d-4539-a521-f9a80045390f-kube-api-access-65m8x\") pod \"33bd8557-b04d-4539-a521-f9a80045390f\" (UID: \"33bd8557-b04d-4539-a521-f9a80045390f\") "
	Nov 29 08:31:30 addons-053273 kubelet[1261]: I1129 08:31:30.596830    1261 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/33bd8557-b04d-4539-a521-f9a80045390f-gcp-creds\") on node \"addons-053273\" DevicePath \"\""
	Nov 29 08:31:30 addons-053273 kubelet[1261]: I1129 08:31:30.599202    1261 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33bd8557-b04d-4539-a521-f9a80045390f-kube-api-access-65m8x" (OuterVolumeSpecName: "kube-api-access-65m8x") pod "33bd8557-b04d-4539-a521-f9a80045390f" (UID: "33bd8557-b04d-4539-a521-f9a80045390f"). InnerVolumeSpecName "kube-api-access-65m8x". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 29 08:31:30 addons-053273 kubelet[1261]: I1129 08:31:30.599982    1261 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^c96570b5-ccfd-11f0-830b-52439bdd4d06" (OuterVolumeSpecName: "task-pv-storage") pod "33bd8557-b04d-4539-a521-f9a80045390f" (UID: "33bd8557-b04d-4539-a521-f9a80045390f"). InnerVolumeSpecName "pvc-96d2f0bf-3001-4a3d-858b-4e66d0ddd91f". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Nov 29 08:31:30 addons-053273 kubelet[1261]: I1129 08:31:30.698053    1261 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-96d2f0bf-3001-4a3d-858b-4e66d0ddd91f\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^c96570b5-ccfd-11f0-830b-52439bdd4d06\") on node \"addons-053273\" "
	Nov 29 08:31:30 addons-053273 kubelet[1261]: I1129 08:31:30.698096    1261 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-65m8x\" (UniqueName: \"kubernetes.io/projected/33bd8557-b04d-4539-a521-f9a80045390f-kube-api-access-65m8x\") on node \"addons-053273\" DevicePath \"\""
	Nov 29 08:31:30 addons-053273 kubelet[1261]: I1129 08:31:30.702418    1261 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-96d2f0bf-3001-4a3d-858b-4e66d0ddd91f" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^c96570b5-ccfd-11f0-830b-52439bdd4d06") on node "addons-053273"
	Nov 29 08:31:30 addons-053273 kubelet[1261]: I1129 08:31:30.799098    1261 reconciler_common.go:299] "Volume detached for volume \"pvc-96d2f0bf-3001-4a3d-858b-4e66d0ddd91f\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^c96570b5-ccfd-11f0-830b-52439bdd4d06\") on node \"addons-053273\" DevicePath \"\""
	Nov 29 08:31:30 addons-053273 kubelet[1261]: I1129 08:31:30.915622    1261 scope.go:117] "RemoveContainer" containerID="146c669e3e1ae0648d5fb56dc4f0634c717472b835f3cb6d091c5da737507619"
	Nov 29 08:31:30 addons-053273 kubelet[1261]: I1129 08:31:30.926695    1261 scope.go:117] "RemoveContainer" containerID="146c669e3e1ae0648d5fb56dc4f0634c717472b835f3cb6d091c5da737507619"
	Nov 29 08:31:30 addons-053273 kubelet[1261]: E1129 08:31:30.927165    1261 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"146c669e3e1ae0648d5fb56dc4f0634c717472b835f3cb6d091c5da737507619\": container with ID starting with 146c669e3e1ae0648d5fb56dc4f0634c717472b835f3cb6d091c5da737507619 not found: ID does not exist" containerID="146c669e3e1ae0648d5fb56dc4f0634c717472b835f3cb6d091c5da737507619"
	Nov 29 08:31:30 addons-053273 kubelet[1261]: I1129 08:31:30.927211    1261 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"146c669e3e1ae0648d5fb56dc4f0634c717472b835f3cb6d091c5da737507619"} err="failed to get container status \"146c669e3e1ae0648d5fb56dc4f0634c717472b835f3cb6d091c5da737507619\": rpc error: code = NotFound desc = could not find container \"146c669e3e1ae0648d5fb56dc4f0634c717472b835f3cb6d091c5da737507619\": container with ID starting with 146c669e3e1ae0648d5fb56dc4f0634c717472b835f3cb6d091c5da737507619 not found: ID does not exist"
	Nov 29 08:31:31 addons-053273 kubelet[1261]: I1129 08:31:31.314946    1261 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33bd8557-b04d-4539-a521-f9a80045390f" path="/var/lib/kubelet/pods/33bd8557-b04d-4539-a521-f9a80045390f/volumes"
	Nov 29 08:31:38 addons-053273 kubelet[1261]: I1129 08:31:38.311781    1261 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-52bjw" secret="" err="secret \"gcp-auth\" not found"
	Nov 29 08:32:33 addons-053273 kubelet[1261]: I1129 08:32:33.315207    1261 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-d5jts" secret="" err="secret \"gcp-auth\" not found"
	Nov 29 08:32:41 addons-053273 kubelet[1261]: I1129 08:32:41.312123    1261 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-52bjw" secret="" err="secret \"gcp-auth\" not found"
	Nov 29 08:32:44 addons-053273 kubelet[1261]: I1129 08:32:44.311331    1261 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-zsxkb" secret="" err="secret \"gcp-auth\" not found"
	Nov 29 08:32:59 addons-053273 kubelet[1261]: I1129 08:32:59.732711    1261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/c1f74a2c-e756-4f98-ac95-0747f7c6987d-gcp-creds\") pod \"hello-world-app-5d498dc89-bdwzf\" (UID: \"c1f74a2c-e756-4f98-ac95-0747f7c6987d\") " pod="default/hello-world-app-5d498dc89-bdwzf"
	Nov 29 08:32:59 addons-053273 kubelet[1261]: I1129 08:32:59.732794    1261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzjjj\" (UniqueName: \"kubernetes.io/projected/c1f74a2c-e756-4f98-ac95-0747f7c6987d-kube-api-access-kzjjj\") pod \"hello-world-app-5d498dc89-bdwzf\" (UID: \"c1f74a2c-e756-4f98-ac95-0747f7c6987d\") " pod="default/hello-world-app-5d498dc89-bdwzf"
	Nov 29 08:33:01 addons-053273 kubelet[1261]: I1129 08:33:01.258997    1261 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-bdwzf" podStartSLOduration=1.873057383 podStartE2EDuration="2.258975831s" podCreationTimestamp="2025-11-29 08:32:59 +0000 UTC" firstStartedPulling="2025-11-29 08:33:00.057313191 +0000 UTC m=+240.825102099" lastFinishedPulling="2025-11-29 08:33:00.443231636 +0000 UTC m=+241.211020547" observedRunningTime="2025-11-29 08:33:01.258805103 +0000 UTC m=+242.026594019" watchObservedRunningTime="2025-11-29 08:33:01.258975831 +0000 UTC m=+242.026764747"
	
	
	==> storage-provisioner [f2891b2f589bc2452e45d6acdd71b5d05e0031591db3c871a122da94ae85e989] <==
	W1129 08:32:36.293063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:32:38.296348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:32:38.300197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:32:40.302762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:32:40.306343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:32:42.309385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:32:42.313183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:32:44.317394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:32:44.321594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:32:46.324592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:32:46.329268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:32:48.332784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:32:48.336342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:32:50.338826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:32:50.342449       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:32:52.344734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:32:52.348398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:32:54.351394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:32:54.355140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:32:56.358030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:32:56.362034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:32:58.365876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:32:58.369895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:33:00.372996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:33:00.377352       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-053273 -n addons-053273
helpers_test.go:269: (dbg) Run:  kubectl --context addons-053273 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-mnxkr ingress-nginx-admission-patch-hhlkx
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-053273 describe pod ingress-nginx-admission-create-mnxkr ingress-nginx-admission-patch-hhlkx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-053273 describe pod ingress-nginx-admission-create-mnxkr ingress-nginx-admission-patch-hhlkx: exit status 1 (56.242669ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-mnxkr" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-hhlkx" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-053273 describe pod ingress-nginx-admission-create-mnxkr ingress-nginx-admission-patch-hhlkx: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-053273 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-053273 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (246.831379ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 08:33:02.370580   24881 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:33:02.370860   24881 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:33:02.370870   24881 out.go:374] Setting ErrFile to fd 2...
	I1129 08:33:02.370873   24881 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:33:02.371083   24881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 08:33:02.371343   24881 mustload.go:66] Loading cluster: addons-053273
	I1129 08:33:02.371698   24881 config.go:182] Loaded profile config "addons-053273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:33:02.371716   24881 addons.go:622] checking whether the cluster is paused
	I1129 08:33:02.371795   24881 config.go:182] Loaded profile config "addons-053273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:33:02.371809   24881 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:33:02.372188   24881 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:33:02.392271   24881 ssh_runner.go:195] Run: systemctl --version
	I1129 08:33:02.392324   24881 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:33:02.409749   24881 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:33:02.509684   24881 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 08:33:02.509781   24881 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 08:33:02.538450   24881 cri.go:89] found id: "8026c36cf38b7fdb674df2a6d65c677c169135d15515b8264ddf60493330acdd"
	I1129 08:33:02.538474   24881 cri.go:89] found id: "36eaff53da4c2f6b42add6d16769dbf72367cea99a40071022ca508b78f204e3"
	I1129 08:33:02.538478   24881 cri.go:89] found id: "ad9d7cb785c36f52b00257c3dc218a265f0c419810e304289cb4bd6ae54ea0fb"
	I1129 08:33:02.538482   24881 cri.go:89] found id: "d81ef79dbc162f4439586944a6f6e7ec9ee49b46c9f99570a494e65d9e844e1c"
	I1129 08:33:02.538485   24881 cri.go:89] found id: "847e89dd7511ca217724747e5a7d3d3e92a9a1203557c6d825fac2ac93fc42de"
	I1129 08:33:02.538490   24881 cri.go:89] found id: "f6278bb7605d8f529b23928a2ff314c13962d33818b19c2e815efc915bdd97b4"
	I1129 08:33:02.538495   24881 cri.go:89] found id: "0d8ff87c1bbdabd06fd59e49a398d261ba1eac9b8e8918bdd207dee2b82f25fe"
	I1129 08:33:02.538499   24881 cri.go:89] found id: "4f7efb63753a43998a433bef512bb3d0227ff650217a030621d7f3865343ac15"
	I1129 08:33:02.538503   24881 cri.go:89] found id: "5858e5f4e311e0929b2f180d850e946a59fcb892b40f826155bc49556663a6c3"
	I1129 08:33:02.538511   24881 cri.go:89] found id: "73fddce698a89389e28548b830f2e792a469943f50f2062c0e8ab0f768c2fdf3"
	I1129 08:33:02.538516   24881 cri.go:89] found id: "e00a3c9e09fd212280a726f4b34a8ae118921841c54dd3f203d4bfbdfe524136"
	I1129 08:33:02.538520   24881 cri.go:89] found id: "25051b979640fb9f0ca6b2fefbbf729dbf13d7807bc2a5bbe509c1427a1450e4"
	I1129 08:33:02.538525   24881 cri.go:89] found id: "89340d645aa11e58c4854339529354f343983cd634d11dc662ffd693c5c439ce"
	I1129 08:33:02.538535   24881 cri.go:89] found id: "5974b84e4fa7722f053401f2d8bab3dd46415a2c49bbe27ca534168936995dda"
	I1129 08:33:02.538538   24881 cri.go:89] found id: "cccc47c19980c886590d29fb88ee7a2c36254653e3f67fc92d57130b657fb514"
	I1129 08:33:02.538546   24881 cri.go:89] found id: "9dd9f49e78582101e727a4a329080a23511be2b196b7bd5a49d11bb18bb9456e"
	I1129 08:33:02.538551   24881 cri.go:89] found id: "ab73d32295897c398656e0ef9e0bde92cfc96cc5b9ea3e9e791a5408cd0a58fe"
	I1129 08:33:02.538557   24881 cri.go:89] found id: "5415e4a1867a4762dcd4722642096717b802f122cd9dcf2f049fe3c310535a6b"
	I1129 08:33:02.538559   24881 cri.go:89] found id: "f2891b2f589bc2452e45d6acdd71b5d05e0031591db3c871a122da94ae85e989"
	I1129 08:33:02.538562   24881 cri.go:89] found id: "536ae01d9c834fda23c10b1cd958984657b851b56d40240995ae742e32fa0686"
	I1129 08:33:02.538565   24881 cri.go:89] found id: "e7636733a471a3a3f946995b88ecad291811005e4acb271d6a45893ae7c423d8"
	I1129 08:33:02.538568   24881 cri.go:89] found id: "e64fa5518306f9c5652b32ea5acb6e82c7ee8c3579473ec40514f095b17e4677"
	I1129 08:33:02.538570   24881 cri.go:89] found id: "42985a54cfd5e3183cf34b9ec4e83a5bb0dacfbaf9b293c28fdbc26c9e683b7b"
	I1129 08:33:02.538573   24881 cri.go:89] found id: "97e8e47987d6cc872f7c55ace3420ac14fe7648d2b6b68805f7ec77909d9de31"
	I1129 08:33:02.538577   24881 cri.go:89] found id: "fa034ef6fed4ecde89cfb42c0d21d8c3f6bf1bb0f3863938a47c874ee0d12dc6"
	I1129 08:33:02.538584   24881 cri.go:89] found id: ""
	I1129 08:33:02.538640   24881 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 08:33:02.553502   24881 out.go:203] 
	W1129 08:33:02.554675   24881 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T08:33:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T08:33:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 08:33:02.554694   24881 out.go:285] * 
	* 
	W1129 08:33:02.557755   24881 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 08:33:02.559020   24881 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-053273 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-053273 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-053273 addons disable ingress --alsologtostderr -v=1: exit status 11 (244.304038ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 08:33:02.615816   24944 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:33:02.616002   24944 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:33:02.616013   24944 out.go:374] Setting ErrFile to fd 2...
	I1129 08:33:02.616020   24944 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:33:02.616205   24944 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 08:33:02.616501   24944 mustload.go:66] Loading cluster: addons-053273
	I1129 08:33:02.616821   24944 config.go:182] Loaded profile config "addons-053273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:33:02.616855   24944 addons.go:622] checking whether the cluster is paused
	I1129 08:33:02.616957   24944 config.go:182] Loaded profile config "addons-053273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:33:02.616976   24944 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:33:02.617340   24944 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:33:02.635184   24944 ssh_runner.go:195] Run: systemctl --version
	I1129 08:33:02.635243   24944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:33:02.654684   24944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:33:02.755494   24944 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 08:33:02.755573   24944 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 08:33:02.784645   24944 cri.go:89] found id: "8026c36cf38b7fdb674df2a6d65c677c169135d15515b8264ddf60493330acdd"
	I1129 08:33:02.784668   24944 cri.go:89] found id: "36eaff53da4c2f6b42add6d16769dbf72367cea99a40071022ca508b78f204e3"
	I1129 08:33:02.784673   24944 cri.go:89] found id: "ad9d7cb785c36f52b00257c3dc218a265f0c419810e304289cb4bd6ae54ea0fb"
	I1129 08:33:02.784678   24944 cri.go:89] found id: "d81ef79dbc162f4439586944a6f6e7ec9ee49b46c9f99570a494e65d9e844e1c"
	I1129 08:33:02.784682   24944 cri.go:89] found id: "847e89dd7511ca217724747e5a7d3d3e92a9a1203557c6d825fac2ac93fc42de"
	I1129 08:33:02.784688   24944 cri.go:89] found id: "f6278bb7605d8f529b23928a2ff314c13962d33818b19c2e815efc915bdd97b4"
	I1129 08:33:02.784693   24944 cri.go:89] found id: "0d8ff87c1bbdabd06fd59e49a398d261ba1eac9b8e8918bdd207dee2b82f25fe"
	I1129 08:33:02.784696   24944 cri.go:89] found id: "4f7efb63753a43998a433bef512bb3d0227ff650217a030621d7f3865343ac15"
	I1129 08:33:02.784700   24944 cri.go:89] found id: "5858e5f4e311e0929b2f180d850e946a59fcb892b40f826155bc49556663a6c3"
	I1129 08:33:02.784714   24944 cri.go:89] found id: "73fddce698a89389e28548b830f2e792a469943f50f2062c0e8ab0f768c2fdf3"
	I1129 08:33:02.784719   24944 cri.go:89] found id: "e00a3c9e09fd212280a726f4b34a8ae118921841c54dd3f203d4bfbdfe524136"
	I1129 08:33:02.784724   24944 cri.go:89] found id: "25051b979640fb9f0ca6b2fefbbf729dbf13d7807bc2a5bbe509c1427a1450e4"
	I1129 08:33:02.784729   24944 cri.go:89] found id: "89340d645aa11e58c4854339529354f343983cd634d11dc662ffd693c5c439ce"
	I1129 08:33:02.784734   24944 cri.go:89] found id: "5974b84e4fa7722f053401f2d8bab3dd46415a2c49bbe27ca534168936995dda"
	I1129 08:33:02.784740   24944 cri.go:89] found id: "cccc47c19980c886590d29fb88ee7a2c36254653e3f67fc92d57130b657fb514"
	I1129 08:33:02.784750   24944 cri.go:89] found id: "9dd9f49e78582101e727a4a329080a23511be2b196b7bd5a49d11bb18bb9456e"
	I1129 08:33:02.784759   24944 cri.go:89] found id: "ab73d32295897c398656e0ef9e0bde92cfc96cc5b9ea3e9e791a5408cd0a58fe"
	I1129 08:33:02.784765   24944 cri.go:89] found id: "5415e4a1867a4762dcd4722642096717b802f122cd9dcf2f049fe3c310535a6b"
	I1129 08:33:02.784769   24944 cri.go:89] found id: "f2891b2f589bc2452e45d6acdd71b5d05e0031591db3c871a122da94ae85e989"
	I1129 08:33:02.784773   24944 cri.go:89] found id: "536ae01d9c834fda23c10b1cd958984657b851b56d40240995ae742e32fa0686"
	I1129 08:33:02.784782   24944 cri.go:89] found id: "e7636733a471a3a3f946995b88ecad291811005e4acb271d6a45893ae7c423d8"
	I1129 08:33:02.784787   24944 cri.go:89] found id: "e64fa5518306f9c5652b32ea5acb6e82c7ee8c3579473ec40514f095b17e4677"
	I1129 08:33:02.784795   24944 cri.go:89] found id: "42985a54cfd5e3183cf34b9ec4e83a5bb0dacfbaf9b293c28fdbc26c9e683b7b"
	I1129 08:33:02.784800   24944 cri.go:89] found id: "97e8e47987d6cc872f7c55ace3420ac14fe7648d2b6b68805f7ec77909d9de31"
	I1129 08:33:02.784805   24944 cri.go:89] found id: "fa034ef6fed4ecde89cfb42c0d21d8c3f6bf1bb0f3863938a47c874ee0d12dc6"
	I1129 08:33:02.784811   24944 cri.go:89] found id: ""
	I1129 08:33:02.784873   24944 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 08:33:02.798589   24944 out.go:203] 
	W1129 08:33:02.799783   24944 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T08:33:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T08:33:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 08:33:02.799799   24944 out.go:285] * 
	* 
	W1129 08:33:02.802733   24944 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 08:33:02.803973   24944 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-053273 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (147.09s)
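
Note on the recurring MK_ADDON_DISABLE_PAUSED failures: every `addons disable` invocation in this run fails identically, because minikube's paused-check shells into the node and runs `sudo runc list -f json`, which on this crio node exits 1 with "open /run/runc: no such file or directory". Below is a minimal by-hand sketch of that check. The first two commands are verbatim from the stderr above; the last `ls` is a hypothetical probe (the /run/crun and /run/crio paths are assumptions about where a crio-configured runtime might keep state, not something this log confirms):

	# Re-run the paused-check commands by hand against this profile:
	minikube -p addons-053273 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	minikube -p addons-053273 ssh -- sudo runc list -f json
	#   ^ fails here: open /run/runc: no such file or directory

	# Hypothetical probe: on a crio node the low-level runtime may be crun, or
	# runc with a non-default --root, so /run/runc never exists even though
	# containers are running. Check which state directories actually exist:
	minikube -p addons-053273 ssh -- ls -d /run/runc /run/crun /run/crio 2>/dev/null

If /run/runc is absent while /run/crun or /run/crio exists, the paused-check is querying the wrong runtime state directory for this container runtime, which would explain why the addon-disable path fails on crio while the crictl listing above it succeeds.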

                                                
                                    
TestAddons/parallel/InspektorGadget (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-bcrxg" [7190e5b8-1fb3-4ad6-9415-ac0bc46e23cd] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003104376s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-053273 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-053273 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (245.708607ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 08:30:43.957311   20665 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:30:43.957440   20665 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:30:43.957448   20665 out.go:374] Setting ErrFile to fd 2...
	I1129 08:30:43.957452   20665 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:30:43.957639   20665 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 08:30:43.957885   20665 mustload.go:66] Loading cluster: addons-053273
	I1129 08:30:43.958234   20665 config.go:182] Loaded profile config "addons-053273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:30:43.958253   20665 addons.go:622] checking whether the cluster is paused
	I1129 08:30:43.958328   20665 config.go:182] Loaded profile config "addons-053273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:30:43.958342   20665 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:30:43.958715   20665 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:30:43.978077   20665 ssh_runner.go:195] Run: systemctl --version
	I1129 08:30:43.978144   20665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:30:43.995523   20665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:30:44.095425   20665 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 08:30:44.095501   20665 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 08:30:44.125097   20665 cri.go:89] found id: "36eaff53da4c2f6b42add6d16769dbf72367cea99a40071022ca508b78f204e3"
	I1129 08:30:44.125118   20665 cri.go:89] found id: "ad9d7cb785c36f52b00257c3dc218a265f0c419810e304289cb4bd6ae54ea0fb"
	I1129 08:30:44.125122   20665 cri.go:89] found id: "d81ef79dbc162f4439586944a6f6e7ec9ee49b46c9f99570a494e65d9e844e1c"
	I1129 08:30:44.125126   20665 cri.go:89] found id: "847e89dd7511ca217724747e5a7d3d3e92a9a1203557c6d825fac2ac93fc42de"
	I1129 08:30:44.125129   20665 cri.go:89] found id: "f6278bb7605d8f529b23928a2ff314c13962d33818b19c2e815efc915bdd97b4"
	I1129 08:30:44.125134   20665 cri.go:89] found id: "0d8ff87c1bbdabd06fd59e49a398d261ba1eac9b8e8918bdd207dee2b82f25fe"
	I1129 08:30:44.125137   20665 cri.go:89] found id: "4f7efb63753a43998a433bef512bb3d0227ff650217a030621d7f3865343ac15"
	I1129 08:30:44.125139   20665 cri.go:89] found id: "5858e5f4e311e0929b2f180d850e946a59fcb892b40f826155bc49556663a6c3"
	I1129 08:30:44.125142   20665 cri.go:89] found id: "73fddce698a89389e28548b830f2e792a469943f50f2062c0e8ab0f768c2fdf3"
	I1129 08:30:44.125154   20665 cri.go:89] found id: "e00a3c9e09fd212280a726f4b34a8ae118921841c54dd3f203d4bfbdfe524136"
	I1129 08:30:44.125157   20665 cri.go:89] found id: "25051b979640fb9f0ca6b2fefbbf729dbf13d7807bc2a5bbe509c1427a1450e4"
	I1129 08:30:44.125161   20665 cri.go:89] found id: "89340d645aa11e58c4854339529354f343983cd634d11dc662ffd693c5c439ce"
	I1129 08:30:44.125163   20665 cri.go:89] found id: "5974b84e4fa7722f053401f2d8bab3dd46415a2c49bbe27ca534168936995dda"
	I1129 08:30:44.125166   20665 cri.go:89] found id: "cccc47c19980c886590d29fb88ee7a2c36254653e3f67fc92d57130b657fb514"
	I1129 08:30:44.125169   20665 cri.go:89] found id: "9dd9f49e78582101e727a4a329080a23511be2b196b7bd5a49d11bb18bb9456e"
	I1129 08:30:44.125174   20665 cri.go:89] found id: "ab73d32295897c398656e0ef9e0bde92cfc96cc5b9ea3e9e791a5408cd0a58fe"
	I1129 08:30:44.125180   20665 cri.go:89] found id: "5415e4a1867a4762dcd4722642096717b802f122cd9dcf2f049fe3c310535a6b"
	I1129 08:30:44.125184   20665 cri.go:89] found id: "f2891b2f589bc2452e45d6acdd71b5d05e0031591db3c871a122da94ae85e989"
	I1129 08:30:44.125187   20665 cri.go:89] found id: "536ae01d9c834fda23c10b1cd958984657b851b56d40240995ae742e32fa0686"
	I1129 08:30:44.125190   20665 cri.go:89] found id: "e7636733a471a3a3f946995b88ecad291811005e4acb271d6a45893ae7c423d8"
	I1129 08:30:44.125195   20665 cri.go:89] found id: "e64fa5518306f9c5652b32ea5acb6e82c7ee8c3579473ec40514f095b17e4677"
	I1129 08:30:44.125201   20665 cri.go:89] found id: "42985a54cfd5e3183cf34b9ec4e83a5bb0dacfbaf9b293c28fdbc26c9e683b7b"
	I1129 08:30:44.125204   20665 cri.go:89] found id: "97e8e47987d6cc872f7c55ace3420ac14fe7648d2b6b68805f7ec77909d9de31"
	I1129 08:30:44.125207   20665 cri.go:89] found id: "fa034ef6fed4ecde89cfb42c0d21d8c3f6bf1bb0f3863938a47c874ee0d12dc6"
	I1129 08:30:44.125215   20665 cri.go:89] found id: ""
	I1129 08:30:44.125255   20665 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 08:30:44.139352   20665 out.go:203] 
	W1129 08:30:44.140318   20665 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T08:30:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 08:30:44.140338   20665 out.go:285] * 
	W1129 08:30:44.143217   20665 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 08:30:44.144339   20665 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-053273 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.25s)
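Note on the failure mode: every MK_ADDON_DISABLE_PAUSED / MK_ADDON_ENABLE_PAUSED exit in this run has the same proximate cause visible in the stderr above. Before enabling or disabling an addon, minikube checks whether the cluster is paused (addons.go:622), and that check shells into the node and runs `sudo runc list -f json`. On this crio node the command fails outright with "open /run/runc: no such file or directory", so the check errors before any paused/unpaused decision is made; one plausible reading is that the node's default OCI runtime keeps its state somewhere other than /run/runc (crun, for instance, uses /run/crun). A diagnostic sketch, assuming the profile name from this run (these commands are illustrative, not part of the test):

	$ out/minikube-linux-amd64 -p addons-053273 ssh -- sudo runc list -f json      # reproduces the error above
	$ out/minikube-linux-amd64 -p addons-053273 ssh -- ls -d /run/runc /run/crun   # shows which runtime state dir actually exists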
TestAddons/parallel/MetricsServer (5.31s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 2.714947ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-48dhj" [64d2fc70-f8ba-4c90-aae2-41bb30f04b8f] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002605274s
addons_test.go:463: (dbg) Run:  kubectl --context addons-053273 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-053273 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-053273 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (243.918782ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1129 08:30:40.836030   20460 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:30:40.836194   20460 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:30:40.836204   20460 out.go:374] Setting ErrFile to fd 2...
	I1129 08:30:40.836209   20460 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:30:40.836393   20460 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 08:30:40.836649   20460 mustload.go:66] Loading cluster: addons-053273
	I1129 08:30:40.836959   20460 config.go:182] Loaded profile config "addons-053273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:30:40.836977   20460 addons.go:622] checking whether the cluster is paused
	I1129 08:30:40.837066   20460 config.go:182] Loaded profile config "addons-053273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:30:40.837088   20460 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:30:40.837477   20460 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:30:40.854809   20460 ssh_runner.go:195] Run: systemctl --version
	I1129 08:30:40.854888   20460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:30:40.872123   20460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:30:40.972486   20460 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 08:30:40.972587   20460 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 08:30:41.002927   20460 cri.go:89] found id: "36eaff53da4c2f6b42add6d16769dbf72367cea99a40071022ca508b78f204e3"
	I1129 08:30:41.002957   20460 cri.go:89] found id: "ad9d7cb785c36f52b00257c3dc218a265f0c419810e304289cb4bd6ae54ea0fb"
	I1129 08:30:41.002961   20460 cri.go:89] found id: "d81ef79dbc162f4439586944a6f6e7ec9ee49b46c9f99570a494e65d9e844e1c"
	I1129 08:30:41.002964   20460 cri.go:89] found id: "847e89dd7511ca217724747e5a7d3d3e92a9a1203557c6d825fac2ac93fc42de"
	I1129 08:30:41.002967   20460 cri.go:89] found id: "f6278bb7605d8f529b23928a2ff314c13962d33818b19c2e815efc915bdd97b4"
	I1129 08:30:41.002970   20460 cri.go:89] found id: "0d8ff87c1bbdabd06fd59e49a398d261ba1eac9b8e8918bdd207dee2b82f25fe"
	I1129 08:30:41.002973   20460 cri.go:89] found id: "4f7efb63753a43998a433bef512bb3d0227ff650217a030621d7f3865343ac15"
	I1129 08:30:41.002976   20460 cri.go:89] found id: "5858e5f4e311e0929b2f180d850e946a59fcb892b40f826155bc49556663a6c3"
	I1129 08:30:41.002978   20460 cri.go:89] found id: "73fddce698a89389e28548b830f2e792a469943f50f2062c0e8ab0f768c2fdf3"
	I1129 08:30:41.002984   20460 cri.go:89] found id: "e00a3c9e09fd212280a726f4b34a8ae118921841c54dd3f203d4bfbdfe524136"
	I1129 08:30:41.002987   20460 cri.go:89] found id: "25051b979640fb9f0ca6b2fefbbf729dbf13d7807bc2a5bbe509c1427a1450e4"
	I1129 08:30:41.002990   20460 cri.go:89] found id: "89340d645aa11e58c4854339529354f343983cd634d11dc662ffd693c5c439ce"
	I1129 08:30:41.002992   20460 cri.go:89] found id: "5974b84e4fa7722f053401f2d8bab3dd46415a2c49bbe27ca534168936995dda"
	I1129 08:30:41.002995   20460 cri.go:89] found id: "cccc47c19980c886590d29fb88ee7a2c36254653e3f67fc92d57130b657fb514"
	I1129 08:30:41.002998   20460 cri.go:89] found id: "9dd9f49e78582101e727a4a329080a23511be2b196b7bd5a49d11bb18bb9456e"
	I1129 08:30:41.003005   20460 cri.go:89] found id: "ab73d32295897c398656e0ef9e0bde92cfc96cc5b9ea3e9e791a5408cd0a58fe"
	I1129 08:30:41.003010   20460 cri.go:89] found id: "5415e4a1867a4762dcd4722642096717b802f122cd9dcf2f049fe3c310535a6b"
	I1129 08:30:41.003015   20460 cri.go:89] found id: "f2891b2f589bc2452e45d6acdd71b5d05e0031591db3c871a122da94ae85e989"
	I1129 08:30:41.003018   20460 cri.go:89] found id: "536ae01d9c834fda23c10b1cd958984657b851b56d40240995ae742e32fa0686"
	I1129 08:30:41.003021   20460 cri.go:89] found id: "e7636733a471a3a3f946995b88ecad291811005e4acb271d6a45893ae7c423d8"
	I1129 08:30:41.003026   20460 cri.go:89] found id: "e64fa5518306f9c5652b32ea5acb6e82c7ee8c3579473ec40514f095b17e4677"
	I1129 08:30:41.003029   20460 cri.go:89] found id: "42985a54cfd5e3183cf34b9ec4e83a5bb0dacfbaf9b293c28fdbc26c9e683b7b"
	I1129 08:30:41.003031   20460 cri.go:89] found id: "97e8e47987d6cc872f7c55ace3420ac14fe7648d2b6b68805f7ec77909d9de31"
	I1129 08:30:41.003034   20460 cri.go:89] found id: "fa034ef6fed4ecde89cfb42c0d21d8c3f6bf1bb0f3863938a47c874ee0d12dc6"
	I1129 08:30:41.003037   20460 cri.go:89] found id: ""
	I1129 08:30:41.003074   20460 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 08:30:41.016585   20460 out.go:203] 
	W1129 08:30:41.017735   20460 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T08:30:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 08:30:41.017751   20460 out.go:285] * 
	W1129 08:30:41.020801   20460 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 08:30:41.022026   20460 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-053273 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.31s)
TestAddons/parallel/CSI (42.34s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1129 08:30:49.410418    9216 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1129 08:30:49.413891    9216 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1129 08:30:49.413915    9216 kapi.go:107] duration metric: took 3.514943ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.523879ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-053273 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053273 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053273 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053273 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053273 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053273 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053273 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053273 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053273 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053273 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053273 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053273 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053273 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053273 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053273 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053273 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053273 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053273 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053273 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053273 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-053273 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [40851530-e11d-4dae-9ef5-d1ea5bd4f084] Pending
helpers_test.go:352: "task-pv-pod" [40851530-e11d-4dae-9ef5-d1ea5bd4f084] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [40851530-e11d-4dae-9ef5-d1ea5bd4f084] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.002738727s
addons_test.go:572: (dbg) Run:  kubectl --context addons-053273 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-053273 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-053273 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-053273 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-053273 delete pod task-pv-pod: (1.076867753s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-053273 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-053273 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053273 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053273 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053273 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053273 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-053273 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [33bd8557-b04d-4539-a521-f9a80045390f] Pending
helpers_test.go:352: "task-pv-pod-restore" [33bd8557-b04d-4539-a521-f9a80045390f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [33bd8557-b04d-4539-a521-f9a80045390f] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003429505s
addons_test.go:614: (dbg) Run:  kubectl --context addons-053273 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-053273 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-053273 delete volumesnapshot new-snapshot-demo
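All of the CSI data-path steps above succeeded: dynamic provisioning (hpvc bound), pod attach, snapshotting, and restore from snapshot. For reference, a minimal sketch of the kind of objects the testdata/csi-hostpath-driver manifests create; the class names csi-hostpath-sc and csi-hostpath-snapclass are assumptions about the addon's defaults, not values read from this run:

	$ kubectl --context addons-053273 apply -f - <<EOF
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: hpvc
	spec:
	  storageClassName: csi-hostpath-sc          # assumed addon default
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 1Gi
	---
	apiVersion: snapshot.storage.k8s.io/v1
	kind: VolumeSnapshot
	metadata:
	  name: new-snapshot-demo
	spec:
	  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed addon default
	  source:
	    persistentVolumeClaimName: hpvc
	EOF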
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-053273 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-053273 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (243.192226ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1129 08:31:31.309893   22866 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:31:31.310199   22866 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:31:31.310209   22866 out.go:374] Setting ErrFile to fd 2...
	I1129 08:31:31.310213   22866 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:31:31.310449   22866 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 08:31:31.310744   22866 mustload.go:66] Loading cluster: addons-053273
	I1129 08:31:31.311124   22866 config.go:182] Loaded profile config "addons-053273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:31:31.311145   22866 addons.go:622] checking whether the cluster is paused
	I1129 08:31:31.311254   22866 config.go:182] Loaded profile config "addons-053273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:31:31.311277   22866 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:31:31.311655   22866 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:31:31.329328   22866 ssh_runner.go:195] Run: systemctl --version
	I1129 08:31:31.329388   22866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:31:31.346683   22866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:31:31.446461   22866 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 08:31:31.446534   22866 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 08:31:31.475515   22866 cri.go:89] found id: "8026c36cf38b7fdb674df2a6d65c677c169135d15515b8264ddf60493330acdd"
	I1129 08:31:31.475545   22866 cri.go:89] found id: "36eaff53da4c2f6b42add6d16769dbf72367cea99a40071022ca508b78f204e3"
	I1129 08:31:31.475550   22866 cri.go:89] found id: "ad9d7cb785c36f52b00257c3dc218a265f0c419810e304289cb4bd6ae54ea0fb"
	I1129 08:31:31.475554   22866 cri.go:89] found id: "d81ef79dbc162f4439586944a6f6e7ec9ee49b46c9f99570a494e65d9e844e1c"
	I1129 08:31:31.475557   22866 cri.go:89] found id: "847e89dd7511ca217724747e5a7d3d3e92a9a1203557c6d825fac2ac93fc42de"
	I1129 08:31:31.475562   22866 cri.go:89] found id: "f6278bb7605d8f529b23928a2ff314c13962d33818b19c2e815efc915bdd97b4"
	I1129 08:31:31.475565   22866 cri.go:89] found id: "0d8ff87c1bbdabd06fd59e49a398d261ba1eac9b8e8918bdd207dee2b82f25fe"
	I1129 08:31:31.475569   22866 cri.go:89] found id: "4f7efb63753a43998a433bef512bb3d0227ff650217a030621d7f3865343ac15"
	I1129 08:31:31.475571   22866 cri.go:89] found id: "5858e5f4e311e0929b2f180d850e946a59fcb892b40f826155bc49556663a6c3"
	I1129 08:31:31.475583   22866 cri.go:89] found id: "73fddce698a89389e28548b830f2e792a469943f50f2062c0e8ab0f768c2fdf3"
	I1129 08:31:31.475589   22866 cri.go:89] found id: "e00a3c9e09fd212280a726f4b34a8ae118921841c54dd3f203d4bfbdfe524136"
	I1129 08:31:31.475592   22866 cri.go:89] found id: "25051b979640fb9f0ca6b2fefbbf729dbf13d7807bc2a5bbe509c1427a1450e4"
	I1129 08:31:31.475595   22866 cri.go:89] found id: "89340d645aa11e58c4854339529354f343983cd634d11dc662ffd693c5c439ce"
	I1129 08:31:31.475598   22866 cri.go:89] found id: "5974b84e4fa7722f053401f2d8bab3dd46415a2c49bbe27ca534168936995dda"
	I1129 08:31:31.475601   22866 cri.go:89] found id: "cccc47c19980c886590d29fb88ee7a2c36254653e3f67fc92d57130b657fb514"
	I1129 08:31:31.475613   22866 cri.go:89] found id: "9dd9f49e78582101e727a4a329080a23511be2b196b7bd5a49d11bb18bb9456e"
	I1129 08:31:31.475620   22866 cri.go:89] found id: "ab73d32295897c398656e0ef9e0bde92cfc96cc5b9ea3e9e791a5408cd0a58fe"
	I1129 08:31:31.475625   22866 cri.go:89] found id: "5415e4a1867a4762dcd4722642096717b802f122cd9dcf2f049fe3c310535a6b"
	I1129 08:31:31.475628   22866 cri.go:89] found id: "f2891b2f589bc2452e45d6acdd71b5d05e0031591db3c871a122da94ae85e989"
	I1129 08:31:31.475630   22866 cri.go:89] found id: "536ae01d9c834fda23c10b1cd958984657b851b56d40240995ae742e32fa0686"
	I1129 08:31:31.475633   22866 cri.go:89] found id: "e7636733a471a3a3f946995b88ecad291811005e4acb271d6a45893ae7c423d8"
	I1129 08:31:31.475636   22866 cri.go:89] found id: "e64fa5518306f9c5652b32ea5acb6e82c7ee8c3579473ec40514f095b17e4677"
	I1129 08:31:31.475638   22866 cri.go:89] found id: "42985a54cfd5e3183cf34b9ec4e83a5bb0dacfbaf9b293c28fdbc26c9e683b7b"
	I1129 08:31:31.475641   22866 cri.go:89] found id: "97e8e47987d6cc872f7c55ace3420ac14fe7648d2b6b68805f7ec77909d9de31"
	I1129 08:31:31.475644   22866 cri.go:89] found id: "fa034ef6fed4ecde89cfb42c0d21d8c3f6bf1bb0f3863938a47c874ee0d12dc6"
	I1129 08:31:31.475647   22866 cri.go:89] found id: ""
	I1129 08:31:31.475697   22866 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 08:31:31.489912   22866 out.go:203] 
	W1129 08:31:31.491163   22866 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T08:31:31Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 08:31:31.491184   22866 out.go:285] * 
	W1129 08:31:31.494248   22866 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 08:31:31.495495   22866 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-053273 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-053273 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-053273 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (248.305973ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1129 08:31:31.553058   22928 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:31:31.553225   22928 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:31:31.553236   22928 out.go:374] Setting ErrFile to fd 2...
	I1129 08:31:31.553240   22928 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:31:31.553506   22928 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 08:31:31.553857   22928 mustload.go:66] Loading cluster: addons-053273
	I1129 08:31:31.554336   22928 config.go:182] Loaded profile config "addons-053273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:31:31.554365   22928 addons.go:622] checking whether the cluster is paused
	I1129 08:31:31.554465   22928 config.go:182] Loaded profile config "addons-053273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:31:31.554482   22928 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:31:31.554955   22928 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:31:31.574381   22928 ssh_runner.go:195] Run: systemctl --version
	I1129 08:31:31.574466   22928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:31:31.591893   22928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:31:31.695533   22928 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 08:31:31.695610   22928 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 08:31:31.724008   22928 cri.go:89] found id: "8026c36cf38b7fdb674df2a6d65c677c169135d15515b8264ddf60493330acdd"
	I1129 08:31:31.724026   22928 cri.go:89] found id: "36eaff53da4c2f6b42add6d16769dbf72367cea99a40071022ca508b78f204e3"
	I1129 08:31:31.724031   22928 cri.go:89] found id: "ad9d7cb785c36f52b00257c3dc218a265f0c419810e304289cb4bd6ae54ea0fb"
	I1129 08:31:31.724035   22928 cri.go:89] found id: "d81ef79dbc162f4439586944a6f6e7ec9ee49b46c9f99570a494e65d9e844e1c"
	I1129 08:31:31.724038   22928 cri.go:89] found id: "847e89dd7511ca217724747e5a7d3d3e92a9a1203557c6d825fac2ac93fc42de"
	I1129 08:31:31.724044   22928 cri.go:89] found id: "f6278bb7605d8f529b23928a2ff314c13962d33818b19c2e815efc915bdd97b4"
	I1129 08:31:31.724047   22928 cri.go:89] found id: "0d8ff87c1bbdabd06fd59e49a398d261ba1eac9b8e8918bdd207dee2b82f25fe"
	I1129 08:31:31.724050   22928 cri.go:89] found id: "4f7efb63753a43998a433bef512bb3d0227ff650217a030621d7f3865343ac15"
	I1129 08:31:31.724053   22928 cri.go:89] found id: "5858e5f4e311e0929b2f180d850e946a59fcb892b40f826155bc49556663a6c3"
	I1129 08:31:31.724058   22928 cri.go:89] found id: "73fddce698a89389e28548b830f2e792a469943f50f2062c0e8ab0f768c2fdf3"
	I1129 08:31:31.724062   22928 cri.go:89] found id: "e00a3c9e09fd212280a726f4b34a8ae118921841c54dd3f203d4bfbdfe524136"
	I1129 08:31:31.724072   22928 cri.go:89] found id: "25051b979640fb9f0ca6b2fefbbf729dbf13d7807bc2a5bbe509c1427a1450e4"
	I1129 08:31:31.724075   22928 cri.go:89] found id: "89340d645aa11e58c4854339529354f343983cd634d11dc662ffd693c5c439ce"
	I1129 08:31:31.724078   22928 cri.go:89] found id: "5974b84e4fa7722f053401f2d8bab3dd46415a2c49bbe27ca534168936995dda"
	I1129 08:31:31.724081   22928 cri.go:89] found id: "cccc47c19980c886590d29fb88ee7a2c36254653e3f67fc92d57130b657fb514"
	I1129 08:31:31.724104   22928 cri.go:89] found id: "9dd9f49e78582101e727a4a329080a23511be2b196b7bd5a49d11bb18bb9456e"
	I1129 08:31:31.724110   22928 cri.go:89] found id: "ab73d32295897c398656e0ef9e0bde92cfc96cc5b9ea3e9e791a5408cd0a58fe"
	I1129 08:31:31.724113   22928 cri.go:89] found id: "5415e4a1867a4762dcd4722642096717b802f122cd9dcf2f049fe3c310535a6b"
	I1129 08:31:31.724116   22928 cri.go:89] found id: "f2891b2f589bc2452e45d6acdd71b5d05e0031591db3c871a122da94ae85e989"
	I1129 08:31:31.724119   22928 cri.go:89] found id: "536ae01d9c834fda23c10b1cd958984657b851b56d40240995ae742e32fa0686"
	I1129 08:31:31.724124   22928 cri.go:89] found id: "e7636733a471a3a3f946995b88ecad291811005e4acb271d6a45893ae7c423d8"
	I1129 08:31:31.724127   22928 cri.go:89] found id: "e64fa5518306f9c5652b32ea5acb6e82c7ee8c3579473ec40514f095b17e4677"
	I1129 08:31:31.724130   22928 cri.go:89] found id: "42985a54cfd5e3183cf34b9ec4e83a5bb0dacfbaf9b293c28fdbc26c9e683b7b"
	I1129 08:31:31.724133   22928 cri.go:89] found id: "97e8e47987d6cc872f7c55ace3420ac14fe7648d2b6b68805f7ec77909d9de31"
	I1129 08:31:31.724136   22928 cri.go:89] found id: "fa034ef6fed4ecde89cfb42c0d21d8c3f6bf1bb0f3863938a47c874ee0d12dc6"
	I1129 08:31:31.724139   22928 cri.go:89] found id: ""
	I1129 08:31:31.724173   22928 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 08:31:31.738190   22928 out.go:203] 
	W1129 08:31:31.739429   22928 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T08:31:31Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 08:31:31.739458   22928 out.go:285] * 
	W1129 08:31:31.742694   22928 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 08:31:31.743924   22928 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-053273 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (42.34s)
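Worth noting: the 42s here were spent almost entirely in the successful provision/snapshot/restore sequence; the FAIL comes only from the two trailing addons disable invocations tripping over the same runc pre-flight check. The snapshot API itself was installed and serving (the readyToUse poll above returned), which can be spot-checked with commands like these (a sketch, assuming the same kubectl context):

	$ kubectl --context addons-053273 get crd volumesnapshots.snapshot.storage.k8s.io
	$ kubectl --context addons-053273 get volumesnapshotclass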
TestAddons/parallel/Headlamp (2.74s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-053273 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-053273 --alsologtostderr -v=1: exit status 11 (258.151929ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1129 08:30:35.780137   18795 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:30:35.780439   18795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:30:35.780450   18795 out.go:374] Setting ErrFile to fd 2...
	I1129 08:30:35.780454   18795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:30:35.780639   18795 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 08:30:35.780923   18795 mustload.go:66] Loading cluster: addons-053273
	I1129 08:30:35.781222   18795 config.go:182] Loaded profile config "addons-053273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:30:35.781238   18795 addons.go:622] checking whether the cluster is paused
	I1129 08:30:35.781316   18795 config.go:182] Loaded profile config "addons-053273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:30:35.781331   18795 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:30:35.781792   18795 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:30:35.801378   18795 ssh_runner.go:195] Run: systemctl --version
	I1129 08:30:35.801432   18795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:30:35.819132   18795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:30:35.920355   18795 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 08:30:35.920438   18795 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 08:30:35.951648   18795 cri.go:89] found id: "36eaff53da4c2f6b42add6d16769dbf72367cea99a40071022ca508b78f204e3"
	I1129 08:30:35.951678   18795 cri.go:89] found id: "ad9d7cb785c36f52b00257c3dc218a265f0c419810e304289cb4bd6ae54ea0fb"
	I1129 08:30:35.951683   18795 cri.go:89] found id: "d81ef79dbc162f4439586944a6f6e7ec9ee49b46c9f99570a494e65d9e844e1c"
	I1129 08:30:35.951687   18795 cri.go:89] found id: "847e89dd7511ca217724747e5a7d3d3e92a9a1203557c6d825fac2ac93fc42de"
	I1129 08:30:35.951693   18795 cri.go:89] found id: "f6278bb7605d8f529b23928a2ff314c13962d33818b19c2e815efc915bdd97b4"
	I1129 08:30:35.951697   18795 cri.go:89] found id: "0d8ff87c1bbdabd06fd59e49a398d261ba1eac9b8e8918bdd207dee2b82f25fe"
	I1129 08:30:35.951702   18795 cri.go:89] found id: "4f7efb63753a43998a433bef512bb3d0227ff650217a030621d7f3865343ac15"
	I1129 08:30:35.951706   18795 cri.go:89] found id: "5858e5f4e311e0929b2f180d850e946a59fcb892b40f826155bc49556663a6c3"
	I1129 08:30:35.951711   18795 cri.go:89] found id: "73fddce698a89389e28548b830f2e792a469943f50f2062c0e8ab0f768c2fdf3"
	I1129 08:30:35.951718   18795 cri.go:89] found id: "e00a3c9e09fd212280a726f4b34a8ae118921841c54dd3f203d4bfbdfe524136"
	I1129 08:30:35.951721   18795 cri.go:89] found id: "25051b979640fb9f0ca6b2fefbbf729dbf13d7807bc2a5bbe509c1427a1450e4"
	I1129 08:30:35.951725   18795 cri.go:89] found id: "89340d645aa11e58c4854339529354f343983cd634d11dc662ffd693c5c439ce"
	I1129 08:30:35.951738   18795 cri.go:89] found id: "5974b84e4fa7722f053401f2d8bab3dd46415a2c49bbe27ca534168936995dda"
	I1129 08:30:35.951741   18795 cri.go:89] found id: "cccc47c19980c886590d29fb88ee7a2c36254653e3f67fc92d57130b657fb514"
	I1129 08:30:35.951744   18795 cri.go:89] found id: "9dd9f49e78582101e727a4a329080a23511be2b196b7bd5a49d11bb18bb9456e"
	I1129 08:30:35.951750   18795 cri.go:89] found id: "ab73d32295897c398656e0ef9e0bde92cfc96cc5b9ea3e9e791a5408cd0a58fe"
	I1129 08:30:35.951753   18795 cri.go:89] found id: "5415e4a1867a4762dcd4722642096717b802f122cd9dcf2f049fe3c310535a6b"
	I1129 08:30:35.951757   18795 cri.go:89] found id: "f2891b2f589bc2452e45d6acdd71b5d05e0031591db3c871a122da94ae85e989"
	I1129 08:30:35.951760   18795 cri.go:89] found id: "536ae01d9c834fda23c10b1cd958984657b851b56d40240995ae742e32fa0686"
	I1129 08:30:35.951762   18795 cri.go:89] found id: "e7636733a471a3a3f946995b88ecad291811005e4acb271d6a45893ae7c423d8"
	I1129 08:30:35.951765   18795 cri.go:89] found id: "e64fa5518306f9c5652b32ea5acb6e82c7ee8c3579473ec40514f095b17e4677"
	I1129 08:30:35.951768   18795 cri.go:89] found id: "42985a54cfd5e3183cf34b9ec4e83a5bb0dacfbaf9b293c28fdbc26c9e683b7b"
	I1129 08:30:35.951771   18795 cri.go:89] found id: "97e8e47987d6cc872f7c55ace3420ac14fe7648d2b6b68805f7ec77909d9de31"
	I1129 08:30:35.951773   18795 cri.go:89] found id: "fa034ef6fed4ecde89cfb42c0d21d8c3f6bf1bb0f3863938a47c874ee0d12dc6"
	I1129 08:30:35.951776   18795 cri.go:89] found id: ""
	I1129 08:30:35.951813   18795 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 08:30:35.966576   18795 out.go:203] 
	W1129 08:30:35.967934   18795 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T08:30:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 08:30:35.967959   18795 out.go:285] * 
	W1129 08:30:35.971464   18795 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 08:30:35.972880   18795 out.go:203] 
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-053273 --alsologtostderr -v=1": exit status 11
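The enable path runs the same pre-flight pause check as disable (MK_ADDON_ENABLE_PAUSED rather than MK_ADDON_DISABLE_PAUSED in the stderr above), so Headlamp was never actually deployed; the post-mortem that follows shows a healthy node rather than anything Headlamp-specific. A quick profile sanity check before re-running would be (sketch):

	$ out/minikube-linux-amd64 status -p addons-053273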
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-053273
helpers_test.go:243: (dbg) docker inspect addons-053273:
-- stdout --
	[
	    {
	        "Id": "4ce9f94b88a0aeb78da65dd6c720d82dc410e5569bcb7062eeccb22a94b4aae5",
	        "Created": "2025-11-29T08:28:44.78754074Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 11213,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T08:28:44.820243244Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/4ce9f94b88a0aeb78da65dd6c720d82dc410e5569bcb7062eeccb22a94b4aae5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4ce9f94b88a0aeb78da65dd6c720d82dc410e5569bcb7062eeccb22a94b4aae5/hostname",
	        "HostsPath": "/var/lib/docker/containers/4ce9f94b88a0aeb78da65dd6c720d82dc410e5569bcb7062eeccb22a94b4aae5/hosts",
	        "LogPath": "/var/lib/docker/containers/4ce9f94b88a0aeb78da65dd6c720d82dc410e5569bcb7062eeccb22a94b4aae5/4ce9f94b88a0aeb78da65dd6c720d82dc410e5569bcb7062eeccb22a94b4aae5-json.log",
	        "Name": "/addons-053273",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-053273:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-053273",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4ce9f94b88a0aeb78da65dd6c720d82dc410e5569bcb7062eeccb22a94b4aae5",
	                "LowerDir": "/var/lib/docker/overlay2/a68c3799c04cab13dc2b78294ec1c9cd7d65d892fed21ffba750fe0af0f4bdd8-init/diff:/var/lib/docker/overlay2/5b012372cfb54f6c71f4d7f0bca0124866eeda530eaa04bd84a67c7b4c8b35a8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a68c3799c04cab13dc2b78294ec1c9cd7d65d892fed21ffba750fe0af0f4bdd8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a68c3799c04cab13dc2b78294ec1c9cd7d65d892fed21ffba750fe0af0f4bdd8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a68c3799c04cab13dc2b78294ec1c9cd7d65d892fed21ffba750fe0af0f4bdd8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-053273",
	                "Source": "/var/lib/docker/volumes/addons-053273/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-053273",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-053273",
	                "name.minikube.sigs.k8s.io": "addons-053273",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "495f5523f5e4c5107e4584b29c8e0886ddf4ef4026b78f557eee47317a6b4154",
	            "SandboxKey": "/var/run/docker/netns/495f5523f5e4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-053273": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "895729fa4ef5de98467555d848cd10702b6938d0e0fc7bd88070035594bde18f",
	                    "EndpointID": "fd9f81e8719230000fcfe8444b5fc505a482fdeff1031c273f079b10ab30b766",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "ba:b7:63:f1:0c:d2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-053273",
	                        "4ce9f94b88a0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
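
The inspect output above ends with the container's port map: each exposed container port (22, 2376, 5000, 8443, 32443) is bound to an ephemeral port on 127.0.0.1. As a minimal sketch of how a harness could recover one of those bindings from `docker container inspect` JSON (the helper name sshHostPort is hypothetical, not minikube code; it performs the same lookup the logs below do with a --format template):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// portBinding mirrors the fields of interest in `docker container inspect` output.
	type portBinding struct {
		HostIp   string
		HostPort string
	}

	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]portBinding
		}
	}

	// sshHostPort returns the host port Docker mapped to the container's 22/tcp.
	func sshHostPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", container).Output()
		if err != nil {
			return "", err
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil {
			return "", err
		}
		if len(entries) == 0 || len(entries[0].NetworkSettings.Ports["22/tcp"]) == 0 {
			return "", fmt.Errorf("no 22/tcp binding for %q", container)
		}
		return entries[0].NetworkSettings.Ports["22/tcp"][0].HostPort, nil
	}

	func main() {
		port, err := sshHostPort("addons-053273")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println(port) // "32768" for the inspect output above
	}
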
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-053273 -n addons-053273
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-053273 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-053273 logs -n 25: (1.283570655s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
	│ start │ -o=json --download-only -p download-only-557052 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-557052 │ jenkins │ v1.37.0 │ 29 Nov 25 08:28 UTC │ │
	│ delete │ --all │ minikube │ jenkins │ v1.37.0 │ 29 Nov 25 08:28 UTC │ 29 Nov 25 08:28 UTC │
	│ delete │ -p download-only-557052 │ download-only-557052 │ jenkins │ v1.37.0 │ 29 Nov 25 08:28 UTC │ 29 Nov 25 08:28 UTC │
	│ start │ -o=json --download-only -p download-only-557986 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-557986 │ jenkins │ v1.37.0 │ 29 Nov 25 08:28 UTC │ │
	│ delete │ --all │ minikube │ jenkins │ v1.37.0 │ 29 Nov 25 08:28 UTC │ 29 Nov 25 08:28 UTC │
	│ delete │ -p download-only-557986 │ download-only-557986 │ jenkins │ v1.37.0 │ 29 Nov 25 08:28 UTC │ 29 Nov 25 08:28 UTC │
	│ delete │ -p download-only-557052 │ download-only-557052 │ jenkins │ v1.37.0 │ 29 Nov 25 08:28 UTC │ 29 Nov 25 08:28 UTC │
	│ delete │ -p download-only-557986 │ download-only-557986 │ jenkins │ v1.37.0 │ 29 Nov 25 08:28 UTC │ 29 Nov 25 08:28 UTC │
	│ start │ --download-only -p download-docker-659543 --alsologtostderr --driver=docker  --container-runtime=crio │ download-docker-659543 │ jenkins │ v1.37.0 │ 29 Nov 25 08:28 UTC │ │
	│ delete │ -p download-docker-659543 │ download-docker-659543 │ jenkins │ v1.37.0 │ 29 Nov 25 08:28 UTC │ 29 Nov 25 08:28 UTC │
	│ start │ --download-only -p binary-mirror-932462 --alsologtostderr --binary-mirror http://127.0.0.1:42911 --driver=docker  --container-runtime=crio │ binary-mirror-932462 │ jenkins │ v1.37.0 │ 29 Nov 25 08:28 UTC │ │
	│ delete │ -p binary-mirror-932462 │ binary-mirror-932462 │ jenkins │ v1.37.0 │ 29 Nov 25 08:28 UTC │ 29 Nov 25 08:28 UTC │
	│ addons │ enable dashboard -p addons-053273 │ addons-053273 │ jenkins │ v1.37.0 │ 29 Nov 25 08:28 UTC │ │
	│ addons │ disable dashboard -p addons-053273 │ addons-053273 │ jenkins │ v1.37.0 │ 29 Nov 25 08:28 UTC │ │
	│ start │ -p addons-053273 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-053273 │ jenkins │ v1.37.0 │ 29 Nov 25 08:28 UTC │ 29 Nov 25 08:30 UTC │
	│ addons │ addons-053273 addons disable volcano --alsologtostderr -v=1 │ addons-053273 │ jenkins │ v1.37.0 │ 29 Nov 25 08:30 UTC │ │
	│ addons │ addons-053273 addons disable gcp-auth --alsologtostderr -v=1 │ addons-053273 │ jenkins │ v1.37.0 │ 29 Nov 25 08:30 UTC │ │
	│ addons │ enable headlamp -p addons-053273 --alsologtostderr -v=1 │ addons-053273 │ jenkins │ v1.37.0 │ 29 Nov 25 08:30 UTC │ │
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 08:28:20
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 08:28:20.821509   10554 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:28:20.821713   10554 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:28:20.821721   10554 out.go:374] Setting ErrFile to fd 2...
	I1129 08:28:20.821725   10554 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:28:20.821915   10554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 08:28:20.822394   10554 out.go:368] Setting JSON to false
	I1129 08:28:20.823165   10554 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":653,"bootTime":1764404248,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 08:28:20.823219   10554 start.go:143] virtualization: kvm guest
	I1129 08:28:20.825097   10554 out.go:179] * [addons-053273] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 08:28:20.826477   10554 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 08:28:20.826460   10554 notify.go:221] Checking for updates...
	I1129 08:28:20.829459   10554 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 08:28:20.830774   10554 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 08:28:20.831923   10554 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5652/.minikube
	I1129 08:28:20.833340   10554 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 08:28:20.834707   10554 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 08:28:20.835889   10554 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 08:28:20.858935   10554 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1129 08:28:20.859069   10554 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 08:28:20.916658   10554 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-11-29 08:28:20.90676456 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 08:28:20.916772   10554 docker.go:319] overlay module found
	I1129 08:28:20.918743   10554 out.go:179] * Using the docker driver based on user configuration
	I1129 08:28:20.920059   10554 start.go:309] selected driver: docker
	I1129 08:28:20.920073   10554 start.go:927] validating driver "docker" against <nil>
	I1129 08:28:20.920084   10554 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 08:28:20.920667   10554 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 08:28:20.979112   10554 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-11-29 08:28:20.969700064 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 08:28:20.979267   10554 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1129 08:28:20.979468   10554 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 08:28:20.981378   10554 out.go:179] * Using Docker driver with root privileges
	I1129 08:28:20.982704   10554 cni.go:84] Creating CNI manager for ""
	I1129 08:28:20.982759   10554 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 08:28:20.982768   10554 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1129 08:28:20.982860   10554 start.go:353] cluster config:
	{Name:addons-053273 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-053273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 08:28:20.984045   10554 out.go:179] * Starting "addons-053273" primary control-plane node in "addons-053273" cluster
	I1129 08:28:20.985007   10554 cache.go:134] Beginning downloading kic base image for docker with crio
	I1129 08:28:20.986123   10554 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 08:28:20.987769   10554 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 08:28:20.987796   10554 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1129 08:28:20.987805   10554 cache.go:65] Caching tarball of preloaded images
	I1129 08:28:20.987861   10554 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 08:28:20.987917   10554 preload.go:238] Found /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1129 08:28:20.987932   10554 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1129 08:28:20.988279   10554 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/config.json ...
	I1129 08:28:20.988307   10554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/config.json: {Name:mk28d0e1aea03b0eb123c81fc976b5dd98ac733e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:28:21.003731   10554 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1129 08:28:21.003890   10554 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1129 08:28:21.003916   10554 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1129 08:28:21.003922   10554 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1129 08:28:21.003929   10554 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	I1129 08:28:21.003935   10554 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from local cache
	I1129 08:28:33.312079   10554 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from cached tarball
	I1129 08:28:33.312119   10554 cache.go:243] Successfully downloaded all kic artifacts
	I1129 08:28:33.312171   10554 start.go:360] acquireMachinesLock for addons-053273: {Name:mkf4f6215d673a1a64758cf7cdbd392ebfc0d5ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 08:28:33.312278   10554 start.go:364] duration metric: took 85.285µs to acquireMachinesLock for "addons-053273"
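
The acquireMachinesLock lines above report a 500ms retry delay and a 10m timeout. minikube uses a named mutex internally; as a rough illustration of the same poll-with-deadline pattern only, a hypothetical O_EXCL lock-file loop:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// acquireLock polls for an exclusive lock file, mirroring the Delay:500ms
	// Timeout:10m parameters logged above. This is an illustrative sketch, not
	// minikube's actual locking code.
	func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil // release by deleting the lock file
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out acquiring %s", path)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquireLock("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			panic(err)
		}
		defer release()
		fmt.Println("machines lock held")
	}
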
	I1129 08:28:33.312318   10554 start.go:93] Provisioning new machine with config: &{Name:addons-053273 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-053273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 08:28:33.312385   10554 start.go:125] createHost starting for "" (driver="docker")
	I1129 08:28:33.313967   10554 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1129 08:28:33.314191   10554 start.go:159] libmachine.API.Create for "addons-053273" (driver="docker")
	I1129 08:28:33.314226   10554 client.go:173] LocalClient.Create starting
	I1129 08:28:33.314369   10554 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem
	I1129 08:28:33.453160   10554 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem
	I1129 08:28:33.585683   10554 cli_runner.go:164] Run: docker network inspect addons-053273 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1129 08:28:33.602536   10554 cli_runner.go:211] docker network inspect addons-053273 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1129 08:28:33.602617   10554 network_create.go:284] running [docker network inspect addons-053273] to gather additional debugging logs...
	I1129 08:28:33.602635   10554 cli_runner.go:164] Run: docker network inspect addons-053273
	W1129 08:28:33.618286   10554 cli_runner.go:211] docker network inspect addons-053273 returned with exit code 1
	I1129 08:28:33.618312   10554 network_create.go:287] error running [docker network inspect addons-053273]: docker network inspect addons-053273: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-053273 not found
	I1129 08:28:33.618323   10554 network_create.go:289] output of [docker network inspect addons-053273]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-053273 not found
	
	** /stderr **
	I1129 08:28:33.618398   10554 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 08:28:33.635090   10554 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cfee50}
	I1129 08:28:33.635130   10554 network_create.go:124] attempt to create docker network addons-053273 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1129 08:28:33.635191   10554 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-053273 addons-053273
	I1129 08:28:33.679674   10554 network_create.go:108] docker network addons-053273 192.168.49.0/24 created
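
The two network_create lines above pick 192.168.49.0/24 as a "free private subnet" before creating the bridge network. A simplified sketch of such a probe, assuming an illustrative candidate list and an overlap test against host interface addresses (minikube's real search also consults existing Docker networks):

	package main

	import (
		"fmt"
		"net"
	)

	// subnetInUse reports whether any host interface address falls inside the
	// candidate CIDR; a true result means the subnet is not free.
	func subnetInUse(cidr string) (bool, error) {
		_, candidate, err := net.ParseCIDR(cidr)
		if err != nil {
			return false, err
		}
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return false, err
		}
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && candidate.Contains(ipnet.IP) {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		// Candidate blocks are illustrative, starting from the one chosen above.
		for _, cidr := range []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"} {
			used, err := subnetInUse(cidr)
			if err != nil {
				panic(err)
			}
			if !used {
				fmt.Println("using free private subnet", cidr)
				return
			}
		}
	}
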
	I1129 08:28:33.679704   10554 kic.go:121] calculated static IP "192.168.49.2" for the "addons-053273" container
	I1129 08:28:33.679770   10554 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1129 08:28:33.695770   10554 cli_runner.go:164] Run: docker volume create addons-053273 --label name.minikube.sigs.k8s.io=addons-053273 --label created_by.minikube.sigs.k8s.io=true
	I1129 08:28:33.713722   10554 oci.go:103] Successfully created a docker volume addons-053273
	I1129 08:28:33.713803   10554 cli_runner.go:164] Run: docker run --rm --name addons-053273-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-053273 --entrypoint /usr/bin/test -v addons-053273:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1129 08:28:40.364715   10554 cli_runner.go:217] Completed: docker run --rm --name addons-053273-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-053273 --entrypoint /usr/bin/test -v addons-053273:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib: (6.650866865s)
	I1129 08:28:40.364743   10554 oci.go:107] Successfully prepared a docker volume addons-053273
	I1129 08:28:40.364797   10554 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 08:28:40.364808   10554 kic.go:194] Starting extracting preloaded images to volume ...
	I1129 08:28:40.364882   10554 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-053273:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1129 08:28:44.715871   10554 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-053273:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.350931606s)
	I1129 08:28:44.715905   10554 kic.go:203] duration metric: took 4.35109227s to extract preloaded images to volume ...
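
The extraction step above populates the named volume by mounting the preload tarball read-only into a throwaway container whose entrypoint is tar; the container is discarded (--rm) but the extracted files persist in the volume. A minimal Go sketch of the same invocation, with placeholder values for the tarball path and image tag:

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		tarball := "/path/to/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4" // placeholder path
		kicImage := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948"          // tag only; digest omitted

		// Bind-mount the tarball read-only, mount the named volume at /extractDir,
		// and run tar with lz4 decompression inside the disposable container.
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", "addons-053273:/extractDir",
			kicImage, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}
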
	W1129 08:28:44.716022   10554 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1129 08:28:44.716077   10554 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1129 08:28:44.716118   10554 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1129 08:28:44.771751   10554 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-053273 --name addons-053273 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-053273 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-053273 --network addons-053273 --ip 192.168.49.2 --volume addons-053273:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1129 08:28:45.060386   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Running}}
	I1129 08:28:45.079989   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:28:45.097634   10554 cli_runner.go:164] Run: docker exec addons-053273 stat /var/lib/dpkg/alternatives/iptables
	I1129 08:28:45.140652   10554 oci.go:144] the created container "addons-053273" has a running status.
	I1129 08:28:45.140682   10554 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa...
	I1129 08:28:45.243853   10554 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1129 08:28:45.267041   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:28:45.285800   10554 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1129 08:28:45.285820   10554 kic_runner.go:114] Args: [docker exec --privileged addons-053273 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1129 08:28:45.331899   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:28:45.352996   10554 machine.go:94] provisionDockerMachine start ...
	I1129 08:28:45.353123   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:28:45.378611   10554 main.go:143] libmachine: Using SSH client type: native
	I1129 08:28:45.378969   10554 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1129 08:28:45.378989   10554 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 08:28:45.380475   10554 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34070->127.0.0.1:32768: read: connection reset by peer
	I1129 08:28:48.522703   10554 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-053273
	
	I1129 08:28:48.522733   10554 ubuntu.go:182] provisioning hostname "addons-053273"
	I1129 08:28:48.522788   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:28:48.540905   10554 main.go:143] libmachine: Using SSH client type: native
	I1129 08:28:48.541139   10554 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1129 08:28:48.541151   10554 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-053273 && echo "addons-053273" | sudo tee /etc/hostname
	I1129 08:28:48.691085   10554 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-053273
	
	I1129 08:28:48.691156   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:28:48.708799   10554 main.go:143] libmachine: Using SSH client type: native
	I1129 08:28:48.709131   10554 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1129 08:28:48.709161   10554 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-053273' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-053273/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-053273' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 08:28:48.851437   10554 main.go:143] libmachine: SSH cmd err, output: <nil>: 
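
The first dial at 08:28:45 above is reset while sshd comes up inside the container, and the retry succeeds about three seconds later; the "native" SSH client then runs provisioning commands such as the hostname block just shown. A rough equivalent of that client, assuming golang.org/x/crypto/ssh and the key path and port from the logs:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key created at 08:28:45 above; 32768 is the host port mapped to 22/tcp.
		key, err := os.ReadFile("/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:32768", &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // tolerable for a local kic container
		})
		if err != nil {
			panic(err)
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()

		out, err := sess.Output("hostname")
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", out) // "addons-053273"
	}
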
	I1129 08:28:48.851464   10554 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-5652/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-5652/.minikube}
	I1129 08:28:48.851501   10554 ubuntu.go:190] setting up certificates
	I1129 08:28:48.851512   10554 provision.go:84] configureAuth start
	I1129 08:28:48.851562   10554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-053273
	I1129 08:28:48.868620   10554 provision.go:143] copyHostCerts
	I1129 08:28:48.868704   10554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem (1078 bytes)
	I1129 08:28:48.868891   10554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem (1123 bytes)
	I1129 08:28:48.868984   10554 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem (1675 bytes)
	I1129 08:28:48.869137   10554 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem org=jenkins.addons-053273 san=[127.0.0.1 192.168.49.2 addons-053273 localhost minikube]
	I1129 08:28:48.913132   10554 provision.go:177] copyRemoteCerts
	I1129 08:28:48.913185   10554 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 08:28:48.913218   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:28:48.930309   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:28:49.030912   10554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1129 08:28:49.049850   10554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1129 08:28:49.066611   10554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1129 08:28:49.082965   10554 provision.go:87] duration metric: took 231.433651ms to configureAuth
	I1129 08:28:49.082998   10554 ubuntu.go:206] setting minikube options for container-runtime
	I1129 08:28:49.083158   10554 config.go:182] Loaded profile config "addons-053273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:28:49.083260   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:28:49.103088   10554 main.go:143] libmachine: Using SSH client type: native
	I1129 08:28:49.103334   10554 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1129 08:28:49.103351   10554 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 08:28:49.381146   10554 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 08:28:49.381173   10554 machine.go:97] duration metric: took 4.028143818s to provisionDockerMachine
	I1129 08:28:49.381184   10554 client.go:176] duration metric: took 16.066947847s to LocalClient.Create
	I1129 08:28:49.381202   10554 start.go:167] duration metric: took 16.067010557s to libmachine.API.Create "addons-053273"
	I1129 08:28:49.381212   10554 start.go:293] postStartSetup for "addons-053273" (driver="docker")
	I1129 08:28:49.381225   10554 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 08:28:49.381287   10554 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 08:28:49.381335   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:28:49.399032   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:28:49.500765   10554 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 08:28:49.504289   10554 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 08:28:49.504322   10554 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 08:28:49.504336   10554 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5652/.minikube/addons for local assets ...
	I1129 08:28:49.504402   10554 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5652/.minikube/files for local assets ...
	I1129 08:28:49.504427   10554 start.go:296] duration metric: took 123.209021ms for postStartSetup
	I1129 08:28:49.504706   10554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-053273
	I1129 08:28:49.522259   10554 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/config.json ...
	I1129 08:28:49.522531   10554 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 08:28:49.522575   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:28:49.539766   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:28:49.636768   10554 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 08:28:49.641061   10554 start.go:128] duration metric: took 16.328660651s to createHost
	I1129 08:28:49.641087   10554 start.go:83] releasing machines lock for "addons-053273", held for 16.328784053s
	I1129 08:28:49.641149   10554 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-053273
	I1129 08:28:49.658233   10554 ssh_runner.go:195] Run: cat /version.json
	I1129 08:28:49.658277   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:28:49.658319   10554 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 08:28:49.658406   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:28:49.675498   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:28:49.675862   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:28:49.826148   10554 ssh_runner.go:195] Run: systemctl --version
	I1129 08:28:49.832959   10554 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 08:28:49.866790   10554 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 08:28:49.871459   10554 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 08:28:49.871523   10554 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 08:28:49.896917   10554 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1129 08:28:49.896944   10554 start.go:496] detecting cgroup driver to use...
	I1129 08:28:49.896978   10554 detect.go:190] detected "systemd" cgroup driver on host os
	I1129 08:28:49.897013   10554 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 08:28:49.912036   10554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 08:28:49.924055   10554 docker.go:218] disabling cri-docker service (if available) ...
	I1129 08:28:49.924110   10554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 08:28:49.939596   10554 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 08:28:49.956279   10554 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 08:28:50.035928   10554 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 08:28:50.123054   10554 docker.go:234] disabling docker service ...
	I1129 08:28:50.123119   10554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 08:28:50.140502   10554 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 08:28:50.152509   10554 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 08:28:50.234277   10554 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 08:28:50.314764   10554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 08:28:50.326516   10554 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 08:28:50.339692   10554 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 08:28:50.339759   10554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 08:28:50.349400   10554 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1129 08:28:50.349465   10554 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 08:28:50.358001   10554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 08:28:50.366061   10554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 08:28:50.374031   10554 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 08:28:50.381767   10554 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 08:28:50.389873   10554 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 08:28:50.402539   10554 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
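
Taken together, the sed edits above leave the /etc/crio/crio.conf.d/02-crio.conf drop-in containing lines equivalent to the following (an illustrative reconstruction from the sed patterns; surrounding keys and TOML section headers are omitted):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

The last entry lets unprivileged pods bind low ports, which the ingress addon relies on.
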
	I1129 08:28:50.411589   10554 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 08:28:50.418610   10554 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1129 08:28:50.418654   10554 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1129 08:28:50.430731   10554 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 08:28:50.437994   10554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 08:28:50.511695   10554 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1129 08:28:50.642132   10554 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 08:28:50.642217   10554 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 08:28:50.646143   10554 start.go:564] Will wait 60s for crictl version
	I1129 08:28:50.646202   10554 ssh_runner.go:195] Run: which crictl
	I1129 08:28:50.649640   10554 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 08:28:50.674924   10554 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1129 08:28:50.675040   10554 ssh_runner.go:195] Run: crio --version
	I1129 08:28:50.702427   10554 ssh_runner.go:195] Run: crio --version
	I1129 08:28:50.732565   10554 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1129 08:28:50.733691   10554 cli_runner.go:164] Run: docker network inspect addons-053273 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 08:28:50.750739   10554 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1129 08:28:50.754893   10554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 08:28:50.765068   10554 kubeadm.go:884] updating cluster {Name:addons-053273 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-053273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 08:28:50.765184   10554 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 08:28:50.765229   10554 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 08:28:50.794304   10554 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 08:28:50.794325   10554 crio.go:433] Images already preloaded, skipping extraction
	I1129 08:28:50.794371   10554 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 08:28:50.818295   10554 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 08:28:50.818316   10554 cache_images.go:86] Images are preloaded, skipping loading
	I1129 08:28:50.818324   10554 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1129 08:28:50.818409   10554 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-053273 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-053273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 08:28:50.818491   10554 ssh_runner.go:195] Run: crio config
	I1129 08:28:50.862704   10554 cni.go:84] Creating CNI manager for ""
	I1129 08:28:50.862728   10554 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 08:28:50.862747   10554 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 08:28:50.862775   10554 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-053273 NodeName:addons-053273 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 08:28:50.862934   10554 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-053273"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1129 08:28:50.863005   10554 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 08:28:50.870833   10554 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 08:28:50.870894   10554 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 08:28:50.878633   10554 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1129 08:28:50.891014   10554 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 08:28:50.904964   10554 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
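The 2209-byte file written above is the three-document config dumped at kubeadm.go:196. If you need to sanity-check such a file by hand before an init, kubeadm can exercise it without touching the node (a sketch, reusing the binary path and config path from this run):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run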
	I1129 08:28:50.917521   10554 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1129 08:28:50.921176   10554 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 08:28:50.930984   10554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 08:28:51.004324   10554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 08:28:51.027097   10554 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273 for IP: 192.168.49.2
	I1129 08:28:51.027128   10554 certs.go:195] generating shared ca certs ...
	I1129 08:28:51.027144   10554 certs.go:227] acquiring lock for ca certs: {Name:mk9945b3aa42b3d50b9bfd795af3a1c63f4e35bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:28:51.027287   10554 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key
	I1129 08:28:51.059569   10554 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt ...
	I1129 08:28:51.059600   10554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt: {Name:mkcd2e4cfe3c1a0a3009971ae94ce4a87857db91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:28:51.059803   10554 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key ...
	I1129 08:28:51.059819   10554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key: {Name:mk5039828d29547a7908ecefaca5b82cb351479e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:28:51.059952   10554 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key
	I1129 08:28:51.119838   10554 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.crt ...
	I1129 08:28:51.119877   10554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.crt: {Name:mk012dbed843d7e9f088d181608049b8a4fc2e95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:28:51.120053   10554 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key ...
	I1129 08:28:51.120065   10554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key: {Name:mk7c0df1c05dee6415ee4f2bea55b60104c150ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:28:51.120138   10554 certs.go:257] generating profile certs ...
	I1129 08:28:51.120190   10554 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/client.key
	I1129 08:28:51.120204   10554 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/client.crt with IP's: []
	I1129 08:28:51.195792   10554 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/client.crt ...
	I1129 08:28:51.195820   10554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/client.crt: {Name:mk167101f2910b47f332157b6a5bd07cf45e6250 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:28:51.195988   10554 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/client.key ...
	I1129 08:28:51.195998   10554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/client.key: {Name:mkfb96b93aa2047d803c91a450fe6fb8ef3d646f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:28:51.196066   10554 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/apiserver.key.c57785be
	I1129 08:28:51.196084   10554 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/apiserver.crt.c57785be with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1129 08:28:51.308591   10554 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/apiserver.crt.c57785be ...
	I1129 08:28:51.308618   10554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/apiserver.crt.c57785be: {Name:mkf68d9053a57ec2aaf85d62bf1fbb6d37a55220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:28:51.308769   10554 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/apiserver.key.c57785be ...
	I1129 08:28:51.308781   10554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/apiserver.key.c57785be: {Name:mkfddb4836ad4620b1ec6797f886f8748b21e6dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:28:51.308859   10554 certs.go:382] copying /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/apiserver.crt.c57785be -> /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/apiserver.crt
	I1129 08:28:51.308938   10554 certs.go:386] copying /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/apiserver.key.c57785be -> /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/apiserver.key
	I1129 08:28:51.308985   10554 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/proxy-client.key
	I1129 08:28:51.309003   10554 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/proxy-client.crt with IP's: []
	I1129 08:28:51.425058   10554 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/proxy-client.crt ...
	I1129 08:28:51.425085   10554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/proxy-client.crt: {Name:mk941f9023d64e26d4e91ab2cb3799246cb4277e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:28:51.425284   10554 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/proxy-client.key ...
	I1129 08:28:51.425300   10554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/proxy-client.key: {Name:mkd49676b219b87226fd39b085adc976a585b9d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:28:51.425483   10554 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem (1679 bytes)
	I1129 08:28:51.425522   10554 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem (1078 bytes)
	I1129 08:28:51.425554   10554 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem (1123 bytes)
	I1129 08:28:51.425577   10554 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem (1675 bytes)
	I1129 08:28:51.426160   10554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 08:28:51.443317   10554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 08:28:51.459404   10554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 08:28:51.475477   10554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1129 08:28:51.491086   10554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1129 08:28:51.506554   10554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 08:28:51.522547   10554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 08:28:51.538393   10554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1129 08:28:51.554301   10554 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 08:28:51.572239   10554 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 08:28:51.583645   10554 ssh_runner.go:195] Run: openssl version
	I1129 08:28:51.589326   10554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 08:28:51.599116   10554 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 08:28:51.602465   10554 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1129 08:28:51.602512   10554 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 08:28:51.635397   10554 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
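The b5213941.0 symlink name above is not arbitrary: following the OpenSSL hashed-directory convention, it is the subject-name hash of the CA certificate plus a .0 suffix, and that hash is exactly what the earlier openssl x509 -hash invocation computes. To verify by hand on the node:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0                                           # -> minikubeCA.pem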
	I1129 08:28:51.643818   10554 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 08:28:51.647303   10554 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1129 08:28:51.647353   10554 kubeadm.go:401] StartCluster: {Name:addons-053273 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-053273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 08:28:51.647434   10554 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 08:28:51.647499   10554 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 08:28:51.672971   10554 cri.go:89] found id: ""
	I1129 08:28:51.673027   10554 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 08:28:51.680585   10554 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1129 08:28:51.687988   10554 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1129 08:28:51.688039   10554 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1129 08:28:51.695255   10554 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1129 08:28:51.695270   10554 kubeadm.go:158] found existing configuration files:
	
	I1129 08:28:51.695311   10554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1129 08:28:51.702322   10554 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1129 08:28:51.702370   10554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1129 08:28:51.709181   10554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1129 08:28:51.716224   10554 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1129 08:28:51.716275   10554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1129 08:28:51.723000   10554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1129 08:28:51.729897   10554 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1129 08:28:51.729946   10554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1129 08:28:51.736823   10554 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1129 08:28:51.743934   10554 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1129 08:28:51.744006   10554 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1129 08:28:51.751038   10554 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1129 08:28:51.787714   10554 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1129 08:28:51.787784   10554 kubeadm.go:319] [preflight] Running pre-flight checks
	I1129 08:28:51.808465   10554 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1129 08:28:51.808567   10554 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1129 08:28:51.808602   10554 kubeadm.go:319] OS: Linux
	I1129 08:28:51.808663   10554 kubeadm.go:319] CGROUPS_CPU: enabled
	I1129 08:28:51.808743   10554 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1129 08:28:51.808857   10554 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1129 08:28:51.808934   10554 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1129 08:28:51.808991   10554 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1129 08:28:51.809051   10554 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1129 08:28:51.809126   10554 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1129 08:28:51.809198   10554 kubeadm.go:319] CGROUPS_IO: enabled
	I1129 08:28:51.862315   10554 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1129 08:28:51.862443   10554 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1129 08:28:51.862584   10554 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1129 08:28:51.869258   10554 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1129 08:28:51.871024   10554 out.go:252]   - Generating certificates and keys ...
	I1129 08:28:51.871110   10554 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1129 08:28:51.871169   10554 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1129 08:28:51.962439   10554 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1129 08:28:52.130390   10554 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1129 08:28:52.469971   10554 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1129 08:28:52.911037   10554 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1129 08:28:52.992053   10554 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1129 08:28:52.992197   10554 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-053273 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1129 08:28:53.196049   10554 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1129 08:28:53.196226   10554 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-053273 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1129 08:28:53.632437   10554 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1129 08:28:53.780766   10554 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1129 08:28:53.815258   10554 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1129 08:28:53.815318   10554 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1129 08:28:54.056010   10554 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1129 08:28:54.175491   10554 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1129 08:28:54.341324   10554 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1129 08:28:54.463568   10554 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1129 08:28:54.537104   10554 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1129 08:28:54.537504   10554 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1129 08:28:54.541329   10554 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1129 08:28:54.544925   10554 out.go:252]   - Booting up control plane ...
	I1129 08:28:54.545048   10554 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1129 08:28:54.545189   10554 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1129 08:28:54.545295   10554 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1129 08:28:54.557337   10554 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1129 08:28:54.557430   10554 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1129 08:28:54.564900   10554 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1129 08:28:54.565157   10554 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1129 08:28:54.565226   10554 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1129 08:28:54.664795   10554 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1129 08:28:54.664964   10554 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1129 08:28:55.166448   10554 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.790312ms
	I1129 08:28:55.169317   10554 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1129 08:28:55.169453   10554 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1129 08:28:55.169577   10554 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1129 08:28:55.169683   10554 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1129 08:28:56.173822   10554 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004405463s
	I1129 08:28:57.200575   10554 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.031242518s
	I1129 08:28:58.671238   10554 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501826625s
	I1129 08:28:58.680685   10554 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1129 08:28:58.689098   10554 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1129 08:28:58.696181   10554 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1129 08:28:58.696436   10554 kubeadm.go:319] [mark-control-plane] Marking the node addons-053273 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1129 08:28:58.703903   10554 kubeadm.go:319] [bootstrap-token] Using token: 4ugug3.583w0frhsqgeg0aj
	I1129 08:28:58.705341   10554 out.go:252]   - Configuring RBAC rules ...
	I1129 08:28:58.705467   10554 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1129 08:28:58.708088   10554 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1129 08:28:58.712542   10554 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1129 08:28:58.714657   10554 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1129 08:28:58.716716   10554 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1129 08:28:58.719491   10554 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1129 08:28:59.077239   10554 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1129 08:28:59.494442   10554 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1129 08:29:00.076815   10554 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1129 08:29:00.077597   10554 kubeadm.go:319] 
	I1129 08:29:00.077684   10554 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1129 08:29:00.077694   10554 kubeadm.go:319] 
	I1129 08:29:00.077778   10554 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1129 08:29:00.077788   10554 kubeadm.go:319] 
	I1129 08:29:00.077822   10554 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1129 08:29:00.077944   10554 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1129 08:29:00.078030   10554 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1129 08:29:00.078045   10554 kubeadm.go:319] 
	I1129 08:29:00.078109   10554 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1129 08:29:00.078116   10554 kubeadm.go:319] 
	I1129 08:29:00.078153   10554 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1129 08:29:00.078160   10554 kubeadm.go:319] 
	I1129 08:29:00.078201   10554 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1129 08:29:00.078265   10554 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1129 08:29:00.078327   10554 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1129 08:29:00.078333   10554 kubeadm.go:319] 
	I1129 08:29:00.078425   10554 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1129 08:29:00.078545   10554 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1129 08:29:00.078553   10554 kubeadm.go:319] 
	I1129 08:29:00.078622   10554 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 4ugug3.583w0frhsqgeg0aj \
	I1129 08:29:00.078729   10554 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c67f487f8861b4229187cb5510054720af943291cf02c59a2b23e487361f58e2 \
	I1129 08:29:00.078752   10554 kubeadm.go:319] 	--control-plane 
	I1129 08:29:00.078758   10554 kubeadm.go:319] 
	I1129 08:29:00.078917   10554 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1129 08:29:00.078928   10554 kubeadm.go:319] 
	I1129 08:29:00.079038   10554 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 4ugug3.583w0frhsqgeg0aj \
	I1129 08:29:00.079182   10554 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c67f487f8861b4229187cb5510054720af943291cf02c59a2b23e487361f58e2 
	I1129 08:29:00.080830   10554 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1129 08:29:00.081013   10554 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
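The --discovery-token-ca-cert-hash printed in the join commands above can be re-derived from the cluster CA if the kubeadm output is lost. A sketch using the certificatesDir from this config (the standard Kubernetes recipe; it assumes an RSA CA key, which is what minikube generates by default):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt | \
	  openssl rsa -pubin -outform der 2>/dev/null | \
	  openssl dgst -sha256 -hex | sed 's/^.* //'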
	I1129 08:29:00.081048   10554 cni.go:84] Creating CNI manager for ""
	I1129 08:29:00.081060   10554 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 08:29:00.083488   10554 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1129 08:29:00.084715   10554 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1129 08:29:00.088721   10554 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1129 08:29:00.088743   10554 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1129 08:29:00.101740   10554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1129 08:29:00.296361   10554 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1129 08:29:00.296453   10554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 08:29:00.296482   10554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-053273 minikube.k8s.io/updated_at=2025_11_29T08_29_00_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af minikube.k8s.io/name=addons-053273 minikube.k8s.io/primary=true
	I1129 08:29:00.305481   10554 ops.go:34] apiserver oom_adj: -16
	I1129 08:29:00.369815   10554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 08:29:00.870690   10554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 08:29:01.369985   10554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 08:29:01.870925   10554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 08:29:02.370453   10554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 08:29:02.870935   10554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 08:29:03.370376   10554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 08:29:03.870021   10554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 08:29:04.370789   10554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 08:29:04.869961   10554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 08:29:05.370495   10554 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 08:29:05.437850   10554 kubeadm.go:1114] duration metric: took 5.141457683s to wait for elevateKubeSystemPrivileges
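The repeated "kubectl get sa default" calls above are a readiness poll: minikube retries every ~500ms until the default ServiceAccount exists (about 5.1s here), since the clusterrolebinding it just created is useless before then. The equivalent loop by hand would be (a sketch, same binary and kubeconfig as this run):

	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do sleep 0.5; done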
	I1129 08:29:05.437892   10554 kubeadm.go:403] duration metric: took 13.790541822s to StartCluster
	I1129 08:29:05.437911   10554 settings.go:142] acquiring lock: {Name:mkebe0b2667fa03f51e92459cd7b7f37fd4c23bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:29:05.438031   10554 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 08:29:05.438493   10554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/kubeconfig: {Name:mk6280be5f60ace95b1c1acc672c087e2366542f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:29:05.438709   10554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1129 08:29:05.438745   10554 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 08:29:05.438801   10554 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1129 08:29:05.438955   10554 addons.go:70] Setting ingress-dns=true in profile "addons-053273"
	I1129 08:29:05.438969   10554 addons.go:70] Setting metrics-server=true in profile "addons-053273"
	I1129 08:29:05.438976   10554 config.go:182] Loaded profile config "addons-053273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:29:05.438991   10554 addons.go:70] Setting gcp-auth=true in profile "addons-053273"
	I1129 08:29:05.438993   10554 addons.go:70] Setting storage-provisioner=true in profile "addons-053273"
	I1129 08:29:05.439003   10554 addons.go:70] Setting volumesnapshots=true in profile "addons-053273"
	I1129 08:29:05.439009   10554 mustload.go:66] Loading cluster: addons-053273
	I1129 08:29:05.439014   10554 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-053273"
	I1129 08:29:05.438980   10554 addons.go:239] Setting addon ingress-dns=true in "addons-053273"
	I1129 08:29:05.439027   10554 addons.go:70] Setting registry-creds=true in profile "addons-053273"
	I1129 08:29:05.439017   10554 addons.go:70] Setting registry=true in profile "addons-053273"
	I1129 08:29:05.439018   10554 addons.go:239] Setting addon volumesnapshots=true in "addons-053273"
	I1129 08:29:05.439044   10554 addons.go:239] Setting addon registry-creds=true in "addons-053273"
	I1129 08:29:05.439060   10554 addons.go:239] Setting addon registry=true in "addons-053273"
	I1129 08:29:05.439068   10554 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:29:05.439074   10554 addons.go:70] Setting ingress=true in profile "addons-053273"
	I1129 08:29:05.439080   10554 addons.go:70] Setting cloud-spanner=true in profile "addons-053273"
	I1129 08:29:05.439089   10554 addons.go:239] Setting addon ingress=true in "addons-053273"
	I1129 08:29:05.439094   10554 addons.go:239] Setting addon cloud-spanner=true in "addons-053273"
	I1129 08:29:05.439107   10554 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:29:05.439110   10554 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:29:05.439117   10554 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:29:05.439123   10554 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-053273"
	I1129 08:29:05.439145   10554 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-053273"
	I1129 08:29:05.439173   10554 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:29:05.439234   10554 config.go:182] Loaded profile config "addons-053273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:29:05.439494   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:29:05.439620   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:29:05.439639   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:29:05.439640   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:29:05.439649   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:29:05.439665   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:29:05.439039   10554 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-053273"
	I1129 08:29:05.439891   10554 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:29:05.440334   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:29:05.438986   10554 addons.go:70] Setting default-storageclass=true in profile "addons-053273"
	I1129 08:29:05.441059   10554 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-053273"
	I1129 08:29:05.441145   10554 out.go:179] * Verifying Kubernetes components...
	I1129 08:29:05.441356   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:29:05.438964   10554 addons.go:70] Setting inspektor-gadget=true in profile "addons-053273"
	I1129 08:29:05.441446   10554 addons.go:239] Setting addon inspektor-gadget=true in "addons-053273"
	I1129 08:29:05.441473   10554 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:29:05.438993   10554 addons.go:70] Setting volcano=true in profile "addons-053273"
	I1129 08:29:05.441826   10554 addons.go:239] Setting addon volcano=true in "addons-053273"
	I1129 08:29:05.441865   10554 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:29:05.438982   10554 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-053273"
	I1129 08:29:05.441927   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:29:05.439075   10554 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:29:05.442305   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:29:05.442363   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:29:05.439063   10554 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:29:05.442412   10554 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-053273"
	I1129 08:29:05.442453   10554 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:29:05.438980   10554 addons.go:239] Setting addon metrics-server=true in "addons-053273"
	I1129 08:29:05.442735   10554 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:29:05.438985   10554 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-053273"
	I1129 08:29:05.443900   10554 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-053273"
	I1129 08:29:05.439020   10554 addons.go:239] Setting addon storage-provisioner=true in "addons-053273"
	I1129 08:29:05.444139   10554 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:29:05.438956   10554 addons.go:70] Setting yakd=true in profile "addons-053273"
	I1129 08:29:05.444989   10554 addons.go:239] Setting addon yakd=true in "addons-053273"
	I1129 08:29:05.445020   10554 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:29:05.445828   10554 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 08:29:05.451825   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:29:05.452065   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:29:05.452652   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:29:05.454082   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:29:05.451830   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:29:05.456426   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:29:05.503977   10554 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1129 08:29:05.505759   10554 addons.go:239] Setting addon default-storageclass=true in "addons-053273"
	I1129 08:29:05.505895   10554 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:29:05.505798   10554 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1129 08:29:05.505956   10554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1129 08:29:05.506018   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:29:05.506271   10554 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1129 08:29:05.506519   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:29:05.508257   10554 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1129 08:29:05.509311   10554 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1129 08:29:05.510408   10554 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1129 08:29:05.510427   10554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1129 08:29:05.510479   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:29:05.511920   10554 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1129 08:29:05.512537   10554 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1129 08:29:05.513418   10554 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:29:05.514571   10554 out.go:179]   - Using image docker.io/registry:3.0.0
	I1129 08:29:05.515597   10554 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1129 08:29:05.515611   10554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1129 08:29:05.515658   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:29:05.516103   10554 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1129 08:29:05.516116   10554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1129 08:29:05.516171   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:29:05.523811   10554 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1129 08:29:05.523812   10554 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 08:29:05.525828   10554 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1129 08:29:05.526825   10554 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 08:29:05.526862   10554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 08:29:05.526920   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:29:05.527327   10554 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1129 08:29:05.527990   10554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1129 08:29:05.527345   10554 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1129 08:29:05.528139   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:29:05.528091   10554 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1129 08:29:05.529540   10554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1129 08:29:05.530122   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:29:05.533473   10554 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1129 08:29:05.533560   10554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1129 08:29:05.533643   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:29:05.541458   10554 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-053273"
	I1129 08:29:05.541507   10554 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:29:05.544114   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:29:05.557762   10554 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1129 08:29:05.557762   10554 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	W1129 08:29:05.558424   10554 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1129 08:29:05.559884   10554 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1129 08:29:05.559902   10554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1129 08:29:05.560009   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:29:05.560322   10554 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1129 08:29:05.561471   10554 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1129 08:29:05.562473   10554 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1129 08:29:05.563528   10554 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1129 08:29:05.564593   10554 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1129 08:29:05.564608   10554 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1129 08:29:05.566110   10554 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1129 08:29:05.566131   10554 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1129 08:29:05.566205   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:29:05.567417   10554 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1129 08:29:05.568467   10554 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1129 08:29:05.568707   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:29:05.569467   10554 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1129 08:29:05.569487   10554 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1129 08:29:05.569565   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:29:05.570616   10554 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1129 08:29:05.571587   10554 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1129 08:29:05.571613   10554 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1129 08:29:05.571671   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:29:05.573934   10554 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1129 08:29:05.575011   10554 out.go:179]   - Using image docker.io/busybox:stable
	I1129 08:29:05.577788   10554 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1129 08:29:05.577878   10554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1129 08:29:05.578023   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:29:05.581404   10554 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1129 08:29:05.582728   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:29:05.583602   10554 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1129 08:29:05.585737   10554 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1129 08:29:05.588073   10554 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1129 08:29:05.588151   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:29:05.589943   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:29:05.594978   10554 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 08:29:05.595003   10554 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 08:29:05.595094   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:29:05.602810   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:29:05.614889   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:29:05.615821   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:29:05.616835   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:29:05.622412   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:29:05.629753   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:29:05.633870   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:29:05.635325   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:29:05.635406   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	W1129 08:29:05.644679   10554 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1129 08:29:05.644739   10554 retry.go:31] will retry after 350.202563ms: ssh: handshake failed: EOF
	I1129 08:29:05.649787   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:29:05.658938   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:29:05.661783   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
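The handshake failure a few lines up is the usual cold-start race: the first SSH dial can land before sshd inside the freshly created container is accepting connections, so sshutil backs off (~350ms here) and tries again. A minimal Go sketch of that retry-with-backoff shape; the helper name, delay schedule, and jitter are illustrative assumptions, not minikube's actual retry.go:

	// retrydial.go — minimal sketch of the retry-with-backoff shape suggested
	// by the sshutil/retry lines above; helper name, delay schedule, and
	// jitter are illustrative assumptions, not minikube's actual retry.go.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff retries fn up to attempts times, sleeping a jittered,
	// roughly doubling delay between tries (compare the ~350ms retry logged
	// above after "ssh: handshake failed: EOF").
	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			// Jitter so concurrent dialers do not retry in lockstep.
			d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		dials := 0
		err := retryWithBackoff(3, 300*time.Millisecond, func() error {
			dials++
			if dials == 1 {
				return errors.New("ssh: handshake failed: EOF") // first dial races sshd startup
			}
			return nil
		})
		fmt.Println("result:", err) // result: <nil>
	}

In this run a single retry suffices: the ssh clients from 08:29:05.649 onward all connect cleanly.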
	I1129 08:29:05.663821   10554 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 08:29:05.746438   10554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1129 08:29:05.759672   10554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1129 08:29:05.769985   10554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1129 08:29:05.774242   10554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 08:29:05.791653   10554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1129 08:29:05.794680   10554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1129 08:29:05.810061   10554 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1129 08:29:05.810087   10554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1129 08:29:05.814473   10554 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1129 08:29:05.814495   10554 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1129 08:29:05.817964   10554 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1129 08:29:05.817984   10554 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1129 08:29:05.820754   10554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1129 08:29:05.826647   10554 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1129 08:29:05.826676   10554 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1129 08:29:05.831091   10554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1129 08:29:05.834280   10554 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1129 08:29:05.834306   10554 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1129 08:29:05.836739   10554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 08:29:05.849129   10554 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1129 08:29:05.849156   10554 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1129 08:29:05.870783   10554 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1129 08:29:05.870815   10554 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1129 08:29:05.878502   10554 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1129 08:29:05.878533   10554 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1129 08:29:05.897317   10554 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1129 08:29:05.897351   10554 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1129 08:29:05.897367   10554 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1129 08:29:05.897381   10554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1129 08:29:05.897317   10554 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1129 08:29:05.897443   10554 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1129 08:29:05.912654   10554 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1129 08:29:05.912686   10554 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1129 08:29:05.943881   10554 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1129 08:29:05.943907   10554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1129 08:29:05.949385   10554 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1129 08:29:05.949418   10554 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1129 08:29:05.953139   10554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1129 08:29:05.959008   10554 node_ready.go:35] waiting up to 6m0s for node "addons-053273" to be "Ready" ...
	I1129 08:29:05.959287   10554 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
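The replace at 08:29:05.583 above is what makes host.minikube.internal resolvable from inside the cluster: the sed pipeline splices a hosts stanza into the coredns Corefile ahead of the forward plugin (and adds the log plugin before errors). Reconstructed from that command, the injected block is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}

fallthrough ensures every other name still falls through to the regular forwarder.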
	I1129 08:29:05.972196   10554 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1129 08:29:05.972219   10554 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1129 08:29:05.977651   10554 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1129 08:29:05.977674   10554 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1129 08:29:05.987915   10554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1129 08:29:06.000555   10554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1129 08:29:06.041722   10554 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1129 08:29:06.041751   10554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1129 08:29:06.051406   10554 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1129 08:29:06.051435   10554 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1129 08:29:06.118312   10554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1129 08:29:06.139356   10554 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1129 08:29:06.139458   10554 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1129 08:29:06.185145   10554 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1129 08:29:06.185174   10554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1129 08:29:06.227236   10554 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1129 08:29:06.227264   10554 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1129 08:29:06.284699   10554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1129 08:29:06.296447   10554 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1129 08:29:06.296473   10554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1129 08:29:06.354475   10554 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1129 08:29:06.354497   10554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1129 08:29:06.406806   10554 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1129 08:29:06.406830   10554 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1129 08:29:06.442173   10554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1129 08:29:06.469381   10554 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-053273" context rescaled to 1 replicas
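The rescale above reflects that a single-node profile needs only one CoreDNS replica. Done by hand it would be roughly the following (profile name taken from this run; the exact client call minikube makes is internal to kapi.go):

	kubectl --context addons-053273 -n kube-system scale deployment coredns --replicas=1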
	I1129 08:29:06.943457   10554 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.196981726s)
	I1129 08:29:06.943507   10554 addons.go:495] Verifying addon ingress=true in "addons-053273"
	I1129 08:29:06.943531   10554 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.183831201s)
	I1129 08:29:06.943651   10554 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.169384355s)
	I1129 08:29:06.943603   10554 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.173596495s)
	I1129 08:29:06.943747   10554 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.152066832s)
	I1129 08:29:06.943879   10554 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.149165517s)
	I1129 08:29:06.943929   10554 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.123143081s)
	I1129 08:29:06.943984   10554 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.112867068s)
	I1129 08:29:06.944018   10554 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.107218724s)
	I1129 08:29:06.944095   10554 addons.go:495] Verifying addon registry=true in "addons-053273"
	I1129 08:29:06.944242   10554 addons.go:495] Verifying addon metrics-server=true in "addons-053273"
	I1129 08:29:06.948742   10554 out.go:179] * Verifying registry addon...
	I1129 08:29:06.948801   10554 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-053273 service yakd-dashboard -n yakd-dashboard
	
	I1129 08:29:06.948748   10554 out.go:179] * Verifying ingress addon...
	W1129 08:29:06.951109   10554 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
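The storage-provisioner-rancher warning above is Kubernetes optimistic concurrency at work: two writers raced on the local-path StorageClass, so the second update carried a stale resourceVersion and was rejected with "the object has been modified". The generic client-go remedy is to re-read and re-apply inside retry.RetryOnConflict; a minimal sketch assuming an already-built clientset (this is the standard pattern, not the addon's own code):

	// storageutil.go — minimal client-go sketch of recovering from the
	// "Operation cannot be fulfilled ... the object has been modified"
	// conflict above; the clientset is assumed to exist, and this is the
	// generic retry-on-conflict pattern, not the addon's own code.
	package storageutil

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	// MarkDefault re-fetches the StorageClass on every attempt so the update
	// always carries a fresh resourceVersion, then retries automatically if
	// another writer got there first.
	func MarkDefault(cs kubernetes.Interface, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
			_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
			return err
		})
	}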
	I1129 08:29:06.951206   10554 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1129 08:29:06.951208   10554 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1129 08:29:06.953623   10554 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1129 08:29:06.953640   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:06.954641   10554 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
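Everything below this point is dominated by kapi.go poll loops: each addon registers a label selector and its pods are listed on a fixed cadence until they leave Pending. A compact sketch of that pattern with client-go; the interval, timeout, and helper name are assumptions rather than kapi.go's actual internals:

	// podwait.go — compact sketch of the label-selector poll behind the
	// repeated kapi "waiting for pod ... current state: Pending" lines;
	// the interval, timeout, and helper name are assumptions rather than
	// kapi.go's actual internals.
	package podwait

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// WaitForPodsRunning polls every interval until every pod matching
	// selector in ns reports phase Running, or the timeout elapses.
	func WaitForPodsRunning(cs kubernetes.Interface, ns, selector string, interval, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(context.TODO(), interval, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // tolerate transient errors and empty lists; keep polling
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						return false, nil // still Pending (or otherwise not yet Running)
					}
				}
				return true, nil
			})
	}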
	I1129 08:29:07.453110   10554 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.334680035s)
	W1129 08:29:07.453181   10554 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1129 08:29:07.453207   10554 retry.go:31] will retry after 334.67341ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
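The failure above is a create-ordering race, not a manifest bug: the batch submits the VolumeSnapshotClass custom resource in the same kubectl apply as the CRDs that define it, and API discovery has not yet registered the new kind (hence "no matches for kind ... ensure CRDs are installed first"). The log shows the remedy minikube takes: after the ~335ms backoff it re-applies the same file list with --force (08:29:07.788 below), by which time the CRDs are established and the apply completes (08:29:10.256). A common generic alternative, using the manifest names from this log, is a two-phase apply with an explicit wait on the CRD's Established condition:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml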
	I1129 08:29:07.453259   10554 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.168472576s)
	I1129 08:29:07.453486   10554 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.011223981s)
	I1129 08:29:07.453507   10554 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-053273"
	I1129 08:29:07.457984   10554 out.go:179] * Verifying csi-hostpath-driver addon...
	I1129 08:29:07.460427   10554 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1129 08:29:07.463441   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:07.464255   10554 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1129 08:29:07.464276   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:07.464541   10554 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1129 08:29:07.464563   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:07.788757   10554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1129 08:29:07.954138   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:07.954254   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1129 08:29:07.961425   10554 node_ready.go:57] node "addons-053273" has "Ready":"False" status (will retry)
	I1129 08:29:07.963289   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:08.454957   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:08.455145   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:08.463202   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:08.954136   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:08.954304   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:08.962764   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:09.454149   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:09.454353   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:09.462920   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:09.954027   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:09.954192   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:09.962718   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:10.256535   10554 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.467738069s)
	I1129 08:29:10.454312   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:10.454521   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1129 08:29:10.461455   10554 node_ready.go:57] node "addons-053273" has "Ready":"False" status (will retry)
	I1129 08:29:10.462420   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:10.954610   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:10.954920   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:10.964632   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:11.455101   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:11.455240   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:11.462668   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:11.954282   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:11.954501   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:11.962440   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:12.454706   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:12.454789   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1129 08:29:12.461701   10554 node_ready.go:57] node "addons-053273" has "Ready":"False" status (will retry)
	I1129 08:29:12.462783   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:12.954962   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:12.955181   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:12.963078   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:13.119765   10554 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1129 08:29:13.119832   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:29:13.137589   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:29:13.244319   10554 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1129 08:29:13.257023   10554 addons.go:239] Setting addon gcp-auth=true in "addons-053273"
	I1129 08:29:13.257077   10554 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:29:13.257404   10554 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:29:13.274724   10554 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1129 08:29:13.274769   10554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:29:13.292998   10554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:29:13.392686   10554 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1129 08:29:13.393860   10554 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1129 08:29:13.394923   10554 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1129 08:29:13.394941   10554 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1129 08:29:13.408336   10554 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1129 08:29:13.408364   10554 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1129 08:29:13.420912   10554 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1129 08:29:13.420936   10554 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1129 08:29:13.434434   10554 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1129 08:29:13.454465   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:13.454540   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:13.462516   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:13.730905   10554 addons.go:495] Verifying addon gcp-auth=true in "addons-053273"
	I1129 08:29:13.732188   10554 out.go:179] * Verifying gcp-auth addon...
	I1129 08:29:13.734155   10554 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1129 08:29:13.736508   10554 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1129 08:29:13.736524   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
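gcp-auth is wired up differently from the other addons: the host's application credentials and project name are copied onto the node (08:29:13.119 and 08:29:13.244 above) and the gcp-auth-webhook manifests are applied so workloads can pick them up. The wait that follows uses the selector shown above; checked by hand it would look like this (profile and label taken from this run):

	kubectl --context addons-053273 -n gcp-auth get pods -l kubernetes.io/minikube-addons=gcp-auth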
	I1129 08:29:13.954210   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:13.954476   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:13.962462   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:14.237153   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:14.454075   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:14.454123   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1129 08:29:14.461952   10554 node_ready.go:57] node "addons-053273" has "Ready":"False" status (will retry)
	I1129 08:29:14.462896   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:14.737861   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:14.954569   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:14.954749   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:14.962760   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:15.237416   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:15.454011   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:15.454023   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:15.463073   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:15.736882   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:15.954644   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:15.954786   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:15.962704   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:16.237420   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:16.454585   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:16.454641   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:16.462657   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:16.737460   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:16.954081   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:16.954126   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1129 08:29:16.962416   10554 node_ready.go:57] node "addons-053273" has "Ready":"False" status (will retry)
	I1129 08:29:16.963233   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:17.237695   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:17.454487   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:17.454583   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:17.462565   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:17.737797   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:17.954529   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:17.954593   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:17.962426   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:18.237077   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:18.455212   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:18.455288   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:18.462646   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:18.737693   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:18.954367   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:18.954561   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:18.962386   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:19.237308   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:19.453719   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:19.453917   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1129 08:29:19.461694   10554 node_ready.go:57] node "addons-053273" has "Ready":"False" status (will retry)
	I1129 08:29:19.462733   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:19.737697   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:19.954516   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:19.954650   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:19.962462   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:20.237521   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:20.454569   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:20.454647   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:20.462834   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:20.737578   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:20.954496   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:20.954702   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:20.962586   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:21.237350   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:21.453937   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:21.454084   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1129 08:29:21.462011   10554 node_ready.go:57] node "addons-053273" has "Ready":"False" status (will retry)
	I1129 08:29:21.463010   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:21.737626   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:21.954485   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:21.954599   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:21.962493   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:22.237334   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:22.454351   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:22.454364   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:22.462544   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:22.737549   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:22.954399   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:22.954418   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:22.963196   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:23.236616   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:23.454305   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:23.454310   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:23.462970   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:23.737757   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:23.954582   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:23.954634   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1129 08:29:23.961651   10554 node_ready.go:57] node "addons-053273" has "Ready":"False" status (will retry)
	I1129 08:29:23.962767   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:24.237567   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:24.454535   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:24.454635   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:24.462787   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:24.737725   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:24.954481   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:24.954571   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:24.963587   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:25.236711   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:25.454445   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:25.454556   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:25.462601   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:25.737459   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:25.954397   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:25.954628   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:25.962729   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:26.237631   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:26.454856   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:26.454871   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1129 08:29:26.461667   10554 node_ready.go:57] node "addons-053273" has "Ready":"False" status (will retry)
	I1129 08:29:26.462620   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:26.737327   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:26.953873   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:26.953997   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:26.962827   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:27.237556   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:27.454353   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:27.454404   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:27.462598   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:27.737563   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:27.954194   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:27.954242   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:27.962571   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:28.237303   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:28.454006   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:28.454024   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1129 08:29:28.461946   10554 node_ready.go:57] node "addons-053273" has "Ready":"False" status (will retry)
	I1129 08:29:28.462984   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:28.737957   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:28.954874   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:28.954911   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:28.962754   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:29.237445   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:29.454026   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:29.454164   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:29.462289   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:29.737240   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:29.953679   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:29.953735   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:29.962621   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:30.237606   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:30.454615   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:30.454834   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:30.462686   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... the same four kapi.go:96 polls (gcp-auth, registry, ingress-nginx, csi-hostpath-driver) repeat every ~250-500 ms, all Pending, from 08:29:30 through 08:29:47; interleaved W-level lines from node_ready.go:57 report node "addons-053273" has "Ready":"False" status (will retry) at 08:29:30, :33, :35, :37, :40, :42 and :44 ...]
	I1129 08:29:47.454176   10554 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1129 08:29:47.454202   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:47.454408   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
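The kapi.go:96 loop above is a label-selector poll: list the pods in kube-system that match the selector, and report Pending until every match is Running. A minimal client-go sketch of that check, assuming an already-built *kubernetes.Clientset; the function name, interval, and timeout here are illustrative, not minikube's actual code:

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodsBySelector polls kube-system until every pod matching the label
// selector is Running, or the timeout expires.
func waitPodsBySelector(ctx context.Context, cs *kubernetes.Clientset, selector string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
				LabelSelector: selector, // e.g. "kubernetes.io/minikube-addons=registry"
			})
			if err != nil {
				return false, nil // transient API error: keep polling
			}
			if len(pods.Items) == 0 {
				return false, nil // nothing scheduled yet
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil // logged above as "current state: Pending"
				}
			}
			return true, nil
		})
}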
	I1129 08:29:47.460860   10554 node_ready.go:49] node "addons-053273" is "Ready"
	I1129 08:29:47.460881   10554 node_ready.go:38] duration metric: took 41.501824884s for node "addons-053273" to be "Ready" ...
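The node wait that just finished (41.5 s) is the same pattern applied to the Node object: re-read it until its Ready condition turns True. A hedged sketch reusing the imports above (nodeReady is an illustrative name, not minikube's):

// nodeReady returns true once the node's Ready condition is True.
func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			// the W-level node_ready.go:57 lines above fire while this is False
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}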
	I1129 08:29:47.460895   10554 api_server.go:52] waiting for apiserver process to appear ...
	I1129 08:29:47.460939   10554 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 08:29:47.462595   10554 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1129 08:29:47.462614   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:47.477301   10554 api_server.go:72] duration metric: took 42.038517373s to wait for apiserver process to appear ...
	I1129 08:29:47.477329   10554 api_server.go:88] waiting for apiserver healthz status ...
	I1129 08:29:47.477350   10554 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1129 08:29:47.482430   10554 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1129 08:29:47.483413   10554 api_server.go:141] control plane version: v1.34.1
	I1129 08:29:47.483445   10554 api_server.go:131] duration metric: took 6.109655ms to wait for apiserver health ...
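The healthz probe at api_server.go:253 is a plain HTTPS GET that expects a 200 with body "ok". A minimal sketch of that shape; skipping TLS verification is a simplification for illustration (the real request authenticates with the cluster's client certificates), and it assumes the endpoint permits the probe, as it does here:

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz GETs e.g. https://192.168.49.2:8443/healthz and expects 200/"ok".
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil // the log prints the 200 and the "ok" body above
}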
	I1129 08:29:47.483458   10554 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 08:29:47.555729   10554 system_pods.go:59] 20 kube-system pods found
	I1129 08:29:47.555771   10554 system_pods.go:61] "amd-gpu-device-plugin-d5jts" [d49cc084-87de-4151-bd07-7d32d21a3754] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1129 08:29:47.555787   10554 system_pods.go:61] "coredns-66bc5c9577-kpln4" [0c815eba-8b6a-47d4-8b05-a715b3dcd17a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 08:29:47.555799   10554 system_pods.go:61] "csi-hostpath-attacher-0" [1f391ce2-abb9-4600-8971-d31b368252f6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1129 08:29:47.555808   10554 system_pods.go:61] "csi-hostpath-resizer-0" [945a92c3-b309-4830-a893-b5cd9c3ae0d7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1129 08:29:47.555817   10554 system_pods.go:61] "csi-hostpathplugin-rvvrd" [813a302e-fd2c-452f-be53-e9bdf6ee6f60] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1129 08:29:47.555853   10554 system_pods.go:61] "etcd-addons-053273" [86ef8f9c-32d8-4877-b010-6a88d85de53c] Running
	I1129 08:29:47.555868   10554 system_pods.go:61] "kindnet-xqwm5" [e96900d3-e678-4123-a26a-9924fdc05772] Running
	I1129 08:29:47.555874   10554 system_pods.go:61] "kube-apiserver-addons-053273" [f51f767c-fffd-4e64-b566-b1a6123060a9] Running
	I1129 08:29:47.555883   10554 system_pods.go:61] "kube-controller-manager-addons-053273" [4d4cffd1-c5e0-4542-842c-7db6cb701e0b] Running
	I1129 08:29:47.555892   10554 system_pods.go:61] "kube-ingress-dns-minikube" [2acc9ff6-35bf-4f93-b76a-4e02d6a36cf8] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1129 08:29:47.555901   10554 system_pods.go:61] "kube-proxy-2gkdk" [fd0daf23-4091-4668-9729-627e0356bc5b] Running
	I1129 08:29:47.555907   10554 system_pods.go:61] "kube-scheduler-addons-053273" [7aed405a-86cf-411c-a827-b79a6935d5f0] Running
	I1129 08:29:47.555917   10554 system_pods.go:61] "metrics-server-85b7d694d7-48dhj" [64d2fc70-f8ba-4c90-aae2-41bb30f04b8f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1129 08:29:47.555929   10554 system_pods.go:61] "nvidia-device-plugin-daemonset-52bjw" [111c52a4-32cd-4beb-a0ed-11bcc2e5bf21] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1129 08:29:47.555939   10554 system_pods.go:61] "registry-6b586f9694-gt598" [43f7eae2-b891-44d2-80dc-8650bee1c9d8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1129 08:29:47.555950   10554 system_pods.go:61] "registry-creds-764b6fb674-ktw8b" [e715e721-8143-4728-b353-67ec7cddd186] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1129 08:29:47.555967   10554 system_pods.go:61] "registry-proxy-zsxkb" [417acc75-98bb-45c2-a648-a0e941620c8e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1129 08:29:47.555979   10554 system_pods.go:61] "snapshot-controller-7d9fbc56b8-lrhxm" [9268b5e7-d8c2-49ed-8980-a63d87cecb6a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 08:29:47.555992   10554 system_pods.go:61] "snapshot-controller-7d9fbc56b8-q48mh" [a4b61b82-5f2f-41e3-98a2-7b12080d1a1f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1129 08:29:47.556004   10554 system_pods.go:61] "storage-provisioner" [42b3498b-6992-4f91-b7bd-bd29a41526d6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 08:29:47.556015   10554 system_pods.go:74] duration metric: took 72.550166ms to wait for pod list to return data ...
	I1129 08:29:47.556030   10554 default_sa.go:34] waiting for default service account to be created ...
	I1129 08:29:47.558468   10554 default_sa.go:45] found service account: "default"
	I1129 08:29:47.558502   10554 default_sa.go:55] duration metric: took 2.452748ms for default service account to be created ...
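The default_sa.go wait is a simple existence check: the "default" ServiceAccount in the "default" namespace appears once kube-controller-manager's service-account controller has run. An illustrative wrapper, same client-go imports as before:

// defaultSAExists reports whether the "default" ServiceAccount has been created.
func defaultSAExists(ctx context.Context, cs *kubernetes.Clientset) bool {
	_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
	return err == nil
}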
	I1129 08:29:47.558511   10554 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 08:29:47.562454   10554 system_pods.go:86] 20 kube-system pods found
	[... pod-by-pod listing identical to the 08:29:47.555 list above (system_pods.go:89) ...]
	I1129 08:29:47.562699   10554 retry.go:31] will retry after 285.944425ms: missing components: kube-dns
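The uneven delays in the retry.go:31 lines (285 ms above, then 234 ms and 463 ms below) point at a jittered backoff. A generic sketch of that shape; the 200-500 ms bounds and function name are guesses for illustration, not minikube's actual policy:

import (
	"math/rand"
	"time"
)

// retryWithJitter re-runs check until it succeeds or attempts run out,
// sleeping a random 200-500 ms between tries.
func retryWithJitter(attempts int, check func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = check(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(300 * time.Millisecond)))
		time.Sleep(200*time.Millisecond + jitter)
	}
	return err
}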
	I1129 08:29:47.737706   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:47.853714   10554 system_pods.go:86] 20 kube-system pods found
	[... pod-by-pod listing identical to the previous one ...]
	I1129 08:29:47.853946   10554 retry.go:31] will retry after 234.686233ms: missing components: kube-dns
	I1129 08:29:47.954512   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:47.954656   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:47.963323   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:48.093946   10554 system_pods.go:86] 20 kube-system pods found
	[... pod-by-pod listing identical to the previous one ...]
	I1129 08:29:48.094186   10554 retry.go:31] will retry after 463.425795ms: missing components: kube-dns
	I1129 08:29:48.237055   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:48.455428   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:48.455634   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:48.466616   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:48.568145   10554 system_pods.go:86] 20 kube-system pods found
	[... same 20-pod listing, except "coredns-66bc5c9577-kpln4" and "storage-provisioner" are now Running ...]
	I1129 08:29:48.568379   10554 system_pods.go:126] duration metric: took 1.009861098s to wait for k8s-apps to be running ...
	I1129 08:29:48.568391   10554 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 08:29:48.568439   10554 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 08:29:48.585050   10554 system_svc.go:56] duration metric: took 16.648806ms WaitForService to wait for kubelet
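The kubelet check above is an exit-code probe: systemctl is-active --quiet returns 0 only when the unit is active. In the log it runs through minikube's SSH runner; sketched here as a local exec call:

import "os/exec"

// kubeletActive mirrors `sudo systemctl is-active --quiet kubelet`:
// a nil error means exit status 0, i.e. the service is active.
func kubeletActive() bool {
	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}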
	I1129 08:29:48.585082   10554 kubeadm.go:587] duration metric: took 43.146302041s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 08:29:48.585109   10554 node_conditions.go:102] verifying NodePressure condition ...
	I1129 08:29:48.587993   10554 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1129 08:29:48.588021   10554 node_conditions.go:123] node cpu capacity is 8
	I1129 08:29:48.588042   10554 node_conditions.go:105] duration metric: took 2.926567ms to run NodePressure ...
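The NodePressure verification reads the capacity figures printed above (304681132Ki ephemeral storage, 8 CPUs) straight off the Node object and checks that no pressure condition is set. An illustrative sketch with the same client-go imports plus fmt:

// describeNodeCapacity prints the node's capacity and fails if any
// memory/disk/PID pressure condition is not False.
func describeNodeCapacity(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("ephemeral storage: %s, cpus: %d\n",
		node.Status.Capacity.StorageEphemeral().String(),
		node.Status.Capacity.Cpu().Value())
	for _, c := range node.Status.Conditions {
		switch c.Type {
		case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
			if c.Status != corev1.ConditionFalse {
				return fmt.Errorf("node %s: %s=%s", name, c.Type, c.Status)
			}
		}
	}
	return nil
}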
	I1129 08:29:48.588057   10554 start.go:242] waiting for startup goroutines ...
	I1129 08:29:48.737757   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:48.958442   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:48.958763   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:48.964402   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... the four kapi.go:96 polls (gcp-auth, registry, ingress-nginx, csi-hostpath-driver) continue every ~250-500 ms, all still Pending, from 08:29:49 through 08:30:01.4 ...]
	I1129 08:30:01.737248   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:01.954292   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:01.954607   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:01.963814   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:02.237835   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:02.455172   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:02.455221   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:02.463892   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:02.737625   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:02.954678   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:02.954720   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:02.963029   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:03.238237   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:03.454535   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:03.454609   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:03.464322   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:03.737149   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:03.954365   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:03.954465   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:03.964580   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:04.237060   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:04.454994   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:04.455055   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:04.463010   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:04.737907   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:04.954948   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:04.955003   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:04.963759   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:05.238025   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:05.455164   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:05.455194   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:05.463555   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:05.737413   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:05.954270   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:05.954395   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:05.963569   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:06.237372   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:06.454235   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:06.454272   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:06.463440   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:06.737134   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:06.953788   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:06.953928   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:06.963146   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:07.236811   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:07.454419   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:07.454451   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:07.463485   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:07.737828   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:07.954781   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:07.955671   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:07.963051   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:08.238148   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:08.454137   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:08.454355   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:08.463811   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:08.738188   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:08.955225   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:08.955276   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:08.963778   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:09.237445   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:09.454763   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:09.454977   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:09.463662   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:09.737462   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:09.954541   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:09.954616   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:09.964043   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:10.237989   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:10.454904   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:10.455026   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:10.463700   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:10.739524   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:10.954752   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:10.954820   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:10.963249   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:11.237380   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:11.454276   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:11.454383   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:11.463188   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:11.738961   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:11.956123   10554 kapi.go:107] duration metric: took 1m5.004912689s to wait for kubernetes.io/minikube-addons=registry ...
	I1129 08:30:11.956398   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:11.964269   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:12.239754   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:12.455470   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:12.464175   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:12.737462   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:12.953836   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:12.963118   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:13.236626   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:13.454918   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:13.463294   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:13.738189   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:13.955657   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:13.963291   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:14.236857   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:14.454990   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:14.463616   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:14.737125   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:14.955004   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:14.963628   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:15.237477   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:15.455042   10554 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:15.464057   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:15.736957   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:15.955199   10554 kapi.go:107] duration metric: took 1m9.003989625s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1129 08:30:15.963735   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:16.238064   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:16.463526   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:16.737301   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:16.964699   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:17.237720   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:17.463995   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:17.737613   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:17.963964   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:18.237888   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:18.463994   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:18.742656   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:18.963917   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:19.238704   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:19.463882   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:19.737834   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:19.963664   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:20.237357   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:20.465087   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:20.739458   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:20.964343   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:21.236630   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:21.463954   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:21.738212   10554 kapi.go:107] duration metric: took 1m8.004054797s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1129 08:30:21.739626   10554 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-053273 cluster.
	I1129 08:30:21.741294   10554 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1129 08:30:21.742562   10554 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
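	
	As the gcp-auth messages above note, a pod opts out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key. A minimal sketch of such a pod spec, assuming an illustrative pod name and a "true" value (the key itself is what the webhook checks; the value is arbitrary):
	
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: skip-gcp-auth              # illustrative name, not from this run
	      labels:
	        gcp-auth-skip-secret: "true"   # presence of this key opts the pod out of credential mounting
	    spec:
	      containers:
	        - name: app
	          image: gcr.io/k8s-minikube/busybox:1.28.4-glibc   # image pulled elsewhere in this run
	          command: ["sleep", "3600"]
	
	Pods created before the addon was enabled keep whatever was injected at creation time; per the message above, recreate them or rerun `addons enable` with `--refresh` to pick up the credentials.
	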
	I1129 08:30:21.964611   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:22.464344   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:22.963885   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:23.463615   10554 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:23.964504   10554 kapi.go:107] duration metric: took 1m16.504077956s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1129 08:30:23.966165   10554 out.go:179] * Enabled addons: cloud-spanner, storage-provisioner, registry-creds, amd-gpu-device-plugin, inspektor-gadget, ingress-dns, metrics-server, yakd, default-storageclass, nvidia-device-plugin, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1129 08:30:23.967288   10554 addons.go:530] duration metric: took 1m18.528488133s for enable addons: enabled=[cloud-spanner storage-provisioner registry-creds amd-gpu-device-plugin inspektor-gadget ingress-dns metrics-server yakd default-storageclass nvidia-device-plugin volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1129 08:30:23.967329   10554 start.go:247] waiting for cluster config update ...
	I1129 08:30:23.967345   10554 start.go:256] writing updated cluster config ...
	I1129 08:30:23.967590   10554 ssh_runner.go:195] Run: rm -f paused
	I1129 08:30:23.971658   10554 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 08:30:23.974687   10554 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kpln4" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 08:30:23.978642   10554 pod_ready.go:94] pod "coredns-66bc5c9577-kpln4" is "Ready"
	I1129 08:30:23.978663   10554 pod_ready.go:86] duration metric: took 3.953847ms for pod "coredns-66bc5c9577-kpln4" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 08:30:23.980464   10554 pod_ready.go:83] waiting for pod "etcd-addons-053273" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 08:30:23.984014   10554 pod_ready.go:94] pod "etcd-addons-053273" is "Ready"
	I1129 08:30:23.984038   10554 pod_ready.go:86] duration metric: took 3.552545ms for pod "etcd-addons-053273" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 08:30:23.985653   10554 pod_ready.go:83] waiting for pod "kube-apiserver-addons-053273" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 08:30:23.988881   10554 pod_ready.go:94] pod "kube-apiserver-addons-053273" is "Ready"
	I1129 08:30:23.988904   10554 pod_ready.go:86] duration metric: took 3.229995ms for pod "kube-apiserver-addons-053273" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 08:30:23.990537   10554 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-053273" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 08:30:24.375559   10554 pod_ready.go:94] pod "kube-controller-manager-addons-053273" is "Ready"
	I1129 08:30:24.375589   10554 pod_ready.go:86] duration metric: took 385.028985ms for pod "kube-controller-manager-addons-053273" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 08:30:24.575685   10554 pod_ready.go:83] waiting for pod "kube-proxy-2gkdk" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 08:30:24.975284   10554 pod_ready.go:94] pod "kube-proxy-2gkdk" is "Ready"
	I1129 08:30:24.975311   10554 pod_ready.go:86] duration metric: took 399.604211ms for pod "kube-proxy-2gkdk" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 08:30:25.175384   10554 pod_ready.go:83] waiting for pod "kube-scheduler-addons-053273" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 08:30:25.574942   10554 pod_ready.go:94] pod "kube-scheduler-addons-053273" is "Ready"
	I1129 08:30:25.574972   10554 pod_ready.go:86] duration metric: took 399.564843ms for pod "kube-scheduler-addons-053273" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 08:30:25.574988   10554 pod_ready.go:40] duration metric: took 1.603296646s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 08:30:25.619034   10554 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1129 08:30:25.621029   10554 out.go:179] * Done! kubectl is now configured to use "addons-053273" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 29 08:30:26 addons-053273 crio[769]: time="2025-11-29T08:30:26.508727903Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ba74fe71-eade-4c29-b06a-ca5e876feb73 name=/runtime.v1.ImageService/PullImage
	Nov 29 08:30:26 addons-053273 crio[769]: time="2025-11-29T08:30:26.510122472Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 29 08:30:27 addons-053273 crio[769]: time="2025-11-29T08:30:27.753477713Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=ba74fe71-eade-4c29-b06a-ca5e876feb73 name=/runtime.v1.ImageService/PullImage
	Nov 29 08:30:27 addons-053273 crio[769]: time="2025-11-29T08:30:27.754101402Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=edd9b108-fbdd-42f1-8790-8d664e558d09 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 08:30:27 addons-053273 crio[769]: time="2025-11-29T08:30:27.755451315Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ff6d8bfb-db4c-47b6-a75c-1483415f52fd name=/runtime.v1.ImageService/ImageStatus
	Nov 29 08:30:27 addons-053273 crio[769]: time="2025-11-29T08:30:27.759218381Z" level=info msg="Creating container: default/busybox/busybox" id=e637429a-ce80-49da-ab6a-ee7254012b41 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 08:30:27 addons-053273 crio[769]: time="2025-11-29T08:30:27.759321786Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 08:30:27 addons-053273 crio[769]: time="2025-11-29T08:30:27.764413761Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 08:30:27 addons-053273 crio[769]: time="2025-11-29T08:30:27.764810291Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 08:30:27 addons-053273 crio[769]: time="2025-11-29T08:30:27.795738274Z" level=info msg="Created container 13245012d20a501bfaf853fc334ad417bc6308bb26c6737f951753d82b1a4bb5: default/busybox/busybox" id=e637429a-ce80-49da-ab6a-ee7254012b41 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 08:30:27 addons-053273 crio[769]: time="2025-11-29T08:30:27.796316163Z" level=info msg="Starting container: 13245012d20a501bfaf853fc334ad417bc6308bb26c6737f951753d82b1a4bb5" id=349f5ead-05ae-47bb-a16d-184067cd3183 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 08:30:27 addons-053273 crio[769]: time="2025-11-29T08:30:27.797967691Z" level=info msg="Started container" PID=6225 containerID=13245012d20a501bfaf853fc334ad417bc6308bb26c6737f951753d82b1a4bb5 description=default/busybox/busybox id=349f5ead-05ae-47bb-a16d-184067cd3183 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fe54bde1470e4afc866118c7810a6785afa9bfea88621c59a50fcbb31bca1073
	Nov 29 08:30:36 addons-053273 crio[769]: time="2025-11-29T08:30:36.43932893Z" level=info msg="Running pod sandbox: default/nginx/POD" id=b4643584-a9e4-463d-b428-252c265a1d21 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 08:30:36 addons-053273 crio[769]: time="2025-11-29T08:30:36.439434456Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 08:30:36 addons-053273 crio[769]: time="2025-11-29T08:30:36.44702118Z" level=info msg="Got pod network &{Name:nginx Namespace:default ID:f7ecb3bf99b7cf0eb2e12f8f3b23020c5211f48b20555d253015fe3bbc90730e UID:659f30f3-f651-4f47-8941-c7e89b0ae22d NetNS:/var/run/netns/9287a278-3420-4340-817b-47ea0efa8181 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000c8c3f8}] Aliases:map[]}"
	Nov 29 08:30:36 addons-053273 crio[769]: time="2025-11-29T08:30:36.447061353Z" level=info msg="Adding pod default_nginx to CNI network \"kindnet\" (type=ptp)"
	Nov 29 08:30:36 addons-053273 crio[769]: time="2025-11-29T08:30:36.458953411Z" level=info msg="Got pod network &{Name:nginx Namespace:default ID:f7ecb3bf99b7cf0eb2e12f8f3b23020c5211f48b20555d253015fe3bbc90730e UID:659f30f3-f651-4f47-8941-c7e89b0ae22d NetNS:/var/run/netns/9287a278-3420-4340-817b-47ea0efa8181 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000c8c3f8}] Aliases:map[]}"
	Nov 29 08:30:36 addons-053273 crio[769]: time="2025-11-29T08:30:36.45917331Z" level=info msg="Checking pod default_nginx for CNI network kindnet (type=ptp)"
	Nov 29 08:30:36 addons-053273 crio[769]: time="2025-11-29T08:30:36.460106276Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 29 08:30:36 addons-053273 crio[769]: time="2025-11-29T08:30:36.460944083Z" level=info msg="Ran pod sandbox f7ecb3bf99b7cf0eb2e12f8f3b23020c5211f48b20555d253015fe3bbc90730e with infra container: default/nginx/POD" id=b4643584-a9e4-463d-b428-252c265a1d21 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 08:30:36 addons-053273 crio[769]: time="2025-11-29T08:30:36.462216377Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=93baa26b-8d19-4a7d-993a-7d9c75675f09 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 08:30:36 addons-053273 crio[769]: time="2025-11-29T08:30:36.462375021Z" level=info msg="Image docker.io/nginx:alpine not found" id=93baa26b-8d19-4a7d-993a-7d9c75675f09 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 08:30:36 addons-053273 crio[769]: time="2025-11-29T08:30:36.462431516Z" level=info msg="Neither image nor artifact docker.io/nginx:alpine found" id=93baa26b-8d19-4a7d-993a-7d9c75675f09 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 08:30:36 addons-053273 crio[769]: time="2025-11-29T08:30:36.463049091Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=f71c5f53-1130-4204-bd1f-05aea5911ac4 name=/runtime.v1.ImageService/PullImage
	Nov 29 08:30:36 addons-053273 crio[769]: time="2025-11-29T08:30:36.466816223Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	13245012d20a5       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          9 seconds ago        Running             busybox                                  0                   fe54bde1470e4       busybox                                    default
	36eaff53da4c2       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          14 seconds ago       Running             csi-snapshotter                          0                   ffc88537c4dd3       csi-hostpathplugin-rvvrd                   kube-system
	ad9d7cb785c36       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          15 seconds ago       Running             csi-provisioner                          0                   ffc88537c4dd3       csi-hostpathplugin-rvvrd                   kube-system
	ab372da9d790c       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 16 seconds ago       Running             gcp-auth                                 0                   9c6ee2cbafb88       gcp-auth-78565c9fb4-msfdv                  gcp-auth
	d81ef79dbc162       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            17 seconds ago       Running             liveness-probe                           0                   ffc88537c4dd3       csi-hostpathplugin-rvvrd                   kube-system
	847e89dd7511c       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           18 seconds ago       Running             hostpath                                 0                   ffc88537c4dd3       csi-hostpathplugin-rvvrd                   kube-system
	d11c4362f35fc       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            19 seconds ago       Running             gadget                                   0                   939d498dc4ebe       gadget-bcrxg                               gadget
	f6278bb7605d8       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                21 seconds ago       Running             node-driver-registrar                    0                   ffc88537c4dd3       csi-hostpathplugin-rvvrd                   kube-system
	6c65eafee7031       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             21 seconds ago       Running             controller                               0                   c14c3fa0cf38b       ingress-nginx-controller-6c8bf45fb-49927   ingress-nginx
	0d8ff87c1bbda       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              25 seconds ago       Running             registry-proxy                           0                   17d0d15b00d60       registry-proxy-zsxkb                       kube-system
	4f7efb63753a4       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     27 seconds ago       Running             nvidia-device-plugin-ctr                 0                   f72dc54d2bd40       nvidia-device-plugin-daemonset-52bjw       kube-system
	8170e81797256       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   31 seconds ago       Exited              patch                                    0                   29e2c0c086806       gcp-auth-certs-patch-fnppk                 gcp-auth
	5858e5f4e311e       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   31 seconds ago       Running             csi-external-health-monitor-controller   0                   ffc88537c4dd3       csi-hostpathplugin-rvvrd                   kube-system
	8a3ab9b2ba3d0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   32 seconds ago       Exited              create                                   0                   2979da1aa488b       gcp-auth-certs-create-q9rzr                gcp-auth
	73fddce698a89       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     32 seconds ago       Running             amd-gpu-device-plugin                    0                   372be2070663b       amd-gpu-device-plugin-d5jts                kube-system
	e00a3c9e09fd2       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      33 seconds ago       Running             volume-snapshot-controller               0                   a19ef62f6d68c       snapshot-controller-7d9fbc56b8-lrhxm       kube-system
	25051b979640f       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      33 seconds ago       Running             volume-snapshot-controller               0                   f5106142b8759       snapshot-controller-7d9fbc56b8-q48mh       kube-system
	89340d645aa11       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              34 seconds ago       Running             csi-resizer                              0                   366a3e72ccee3       csi-hostpath-resizer-0                     kube-system
	5a1b833eaac88       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   35 seconds ago       Exited              patch                                    0                   3ac42e5437d83       ingress-nginx-admission-patch-hhlkx        ingress-nginx
	bf1f819dfa2fc       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   35 seconds ago       Exited              create                                   0                   8ce18433a8ddd       ingress-nginx-admission-create-mnxkr       ingress-nginx
	b181b0edd3181       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               36 seconds ago       Running             cloud-spanner-emulator                   0                   c06a8c8cff9e8       cloud-spanner-emulator-5bdddb765-4krxw     default
	5974b84e4fa77       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             39 seconds ago       Running             csi-attacher                             0                   3966d25f69b01       csi-hostpath-attacher-0                    kube-system
	e84f185b68fcf       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              40 seconds ago       Running             yakd                                     0                   31cba181447cd       yakd-dashboard-5ff678cb9-bxgzw             yakd-dashboard
	a39e2096a5720       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             42 seconds ago       Running             local-path-provisioner                   0                   3233fa902512f       local-path-provisioner-648f6765c9-9nfgr    local-path-storage
	cccc47c19980c       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           43 seconds ago       Running             registry                                 0                   0b08f455bffca       registry-6b586f9694-gt598                  kube-system
	9dd9f49e78582       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               44 seconds ago       Running             minikube-ingress-dns                     0                   767d0f7a26bc3       kube-ingress-dns-minikube                  kube-system
	ab73d32295897       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        48 seconds ago       Running             metrics-server                           0                   562083ab5cb26       metrics-server-85b7d694d7-48dhj            kube-system
	5415e4a1867a4       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             49 seconds ago       Running             coredns                                  0                   dddb7aeaa7f34       coredns-66bc5c9577-kpln4                   kube-system
	f2891b2f589bc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             49 seconds ago       Running             storage-provisioner                      0                   2a3c640c2c0fd       storage-provisioner                        kube-system
	536ae01d9c834       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             About a minute ago   Running             kube-proxy                               0                   382f06683ff51       kube-proxy-2gkdk                           kube-system
	e7636733a471a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   aa0650db1b9c7       kindnet-xqwm5                              kube-system
	e64fa5518306f       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             About a minute ago   Running             kube-apiserver                           0                   b51e06b2e8480       kube-apiserver-addons-053273               kube-system
	42985a54cfd5e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             About a minute ago   Running             kube-scheduler                           0                   5186d76f5e9a8       kube-scheduler-addons-053273               kube-system
	97e8e47987d6c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             About a minute ago   Running             etcd                                     0                   cb112577a4bda       etcd-addons-053273                         kube-system
	fa034ef6fed4e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             About a minute ago   Running             kube-controller-manager                  0                   08a1beb60974a       kube-controller-manager-addons-053273      kube-system
	
	
	==> coredns [5415e4a1867a4762dcd4722642096717b802f122cd9dcf2f049fe3c310535a6b] <==
	[INFO] 10.244.0.19:33637 - 9499 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000128378s
	[INFO] 10.244.0.19:52986 - 44394 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000082625s
	[INFO] 10.244.0.19:52986 - 44659 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00012483s
	[INFO] 10.244.0.19:50750 - 59555 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000079938s
	[INFO] 10.244.0.19:50750 - 59866 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000119907s
	[INFO] 10.244.0.19:37320 - 11197 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000086306s
	[INFO] 10.244.0.19:37320 - 10959 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000127712s
	[INFO] 10.244.0.19:56765 - 46691 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000058902s
	[INFO] 10.244.0.19:56765 - 46861 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.0000989s
	[INFO] 10.244.0.19:38114 - 15585 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00009945s
	[INFO] 10.244.0.19:38114 - 15187 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000144649s
	[INFO] 10.244.0.22:60805 - 30405 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000207425s
	[INFO] 10.244.0.22:34518 - 19658 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000289369s
	[INFO] 10.244.0.22:46861 - 58856 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000117449s
	[INFO] 10.244.0.22:57994 - 18616 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000160006s
	[INFO] 10.244.0.22:43633 - 13106 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000119502s
	[INFO] 10.244.0.22:32875 - 29187 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000181054s
	[INFO] 10.244.0.22:48355 - 63318 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.004851052s
	[INFO] 10.244.0.22:55104 - 15886 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.00670072s
	[INFO] 10.244.0.22:41019 - 52352 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004179153s
	[INFO] 10.244.0.22:48708 - 17314 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004324775s
	[INFO] 10.244.0.22:38782 - 636 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003679518s
	[INFO] 10.244.0.22:51141 - 39325 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.007276491s
	[INFO] 10.244.0.22:46590 - 39527 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.002272497s
	[INFO] 10.244.0.22:41575 - 28120 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002160821s
	
	
	==> describe nodes <==
	Name:               addons-053273
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-053273
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=addons-053273
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T08_29_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-053273
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-053273"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 08:28:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-053273
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 08:30:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 08:30:31 +0000   Sat, 29 Nov 2025 08:28:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 08:30:31 +0000   Sat, 29 Nov 2025 08:28:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 08:30:31 +0000   Sat, 29 Nov 2025 08:28:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 08:30:31 +0000   Sat, 29 Nov 2025 08:29:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-053273
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                1ccd1d4c-726c-4b43-bb60-99ea539b61bc
	  Boot ID:                    5b66e152-4b71-4129-a911-cefbf4861c86
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     cloud-spanner-emulator-5bdddb765-4krxw      0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  gadget                      gadget-bcrxg                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  gcp-auth                    gcp-auth-78565c9fb4-msfdv                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-49927    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         91s
	  kube-system                 amd-gpu-device-plugin-d5jts                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 coredns-66bc5c9577-kpln4                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     92s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 csi-hostpathplugin-rvvrd                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 etcd-addons-053273                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         98s
	  kube-system                 kindnet-xqwm5                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      93s
	  kube-system                 kube-apiserver-addons-053273                250m (3%)     0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-controller-manager-addons-053273       200m (2%)     0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-proxy-2gkdk                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-scheduler-addons-053273                100m (1%)     0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 metrics-server-85b7d694d7-48dhj             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         91s
	  kube-system                 nvidia-device-plugin-daemonset-52bjw        0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 registry-6b586f9694-gt598                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 registry-creds-764b6fb674-ktw8b             0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 registry-proxy-zsxkb                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 snapshot-controller-7d9fbc56b8-lrhxm        0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 snapshot-controller-7d9fbc56b8-q48mh        0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  local-path-storage          local-path-provisioner-648f6765c9-9nfgr     0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-bxgzw              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     91s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 90s   kube-proxy       
	  Normal  Starting                 98s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  98s   kubelet          Node addons-053273 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    98s   kubelet          Node addons-053273 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     98s   kubelet          Node addons-053273 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           93s   node-controller  Node addons-053273 event: Registered Node addons-053273 in Controller
	  Normal  NodeReady                50s   kubelet          Node addons-053273 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov29 08:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001749] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085014] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405416] i8042: Warning: Keylock active
	[  +0.011437] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.505353] block sda: the capability attribute has been deprecated.
	[  +0.088968] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025527] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.969002] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [97e8e47987d6cc872f7c55ace3420ac14fe7648d2b6b68805f7ec77909d9de31] <==
	{"level":"warn","ts":"2025-11-29T08:28:56.625943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:28:56.632403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:28:56.638651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:28:56.650976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:28:56.657498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:28:56.663836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:28:56.669548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:28:56.676257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:28:56.682174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:28:56.688134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:28:56.694137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:28:56.699830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:28:56.705938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:28:56.712661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:28:56.718385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:28:56.736113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:28:56.742801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:28:56.749666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:28:56.797436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:29:08.012975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:29:08.019874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:29:34.188720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:29:34.194933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:29:34.215480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:29:34.222003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34732","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [ab372da9d790cd8b860ba694fee97376b9817e969256c677c3f3d3d70d64cabb] <==
	2025/11/29 08:30:21 GCP Auth Webhook started!
	2025/11/29 08:30:25 Ready to marshal response ...
	2025/11/29 08:30:25 Ready to write response ...
	2025/11/29 08:30:26 Ready to marshal response ...
	2025/11/29 08:30:26 Ready to write response ...
	2025/11/29 08:30:26 Ready to marshal response ...
	2025/11/29 08:30:26 Ready to write response ...
	2025/11/29 08:30:36 Ready to marshal response ...
	2025/11/29 08:30:36 Ready to write response ...
	
	
	==> kernel <==
	 08:30:37 up 13 min,  0 user,  load average: 1.91, 0.87, 0.33
	Linux addons-053273 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e7636733a471a3a3f946995b88ecad291811005e4acb271d6a45893ae7c423d8] <==
	I1129 08:29:06.670010       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T08:29:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 08:29:06.960485       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 08:29:06.960534       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 08:29:06.960545       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 08:29:06.960738       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1129 08:29:36.884916       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1129 08:29:36.961682       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1129 08:29:36.961724       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1129 08:29:36.969871       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1129 08:29:38.460759       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 08:29:38.460782       1 metrics.go:72] Registering metrics
	I1129 08:29:38.460878       1 controller.go:711] "Syncing nftables rules"
	I1129 08:29:46.885328       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 08:29:46.885399       1 main.go:301] handling current node
	I1129 08:29:56.882276       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 08:29:56.882331       1 main.go:301] handling current node
	I1129 08:30:06.882344       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 08:30:06.882397       1 main.go:301] handling current node
	I1129 08:30:16.882446       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 08:30:16.882478       1 main.go:301] handling current node
	I1129 08:30:26.882539       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 08:30:26.882599       1 main.go:301] handling current node
	I1129 08:30:36.882588       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 08:30:36.882634       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e64fa5518306f9c5652b32ea5acb6e82c7ee8c3579473ec40514f095b17e4677] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1129 08:29:50.472246       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.45.99:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.45.99:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.45.99:443: connect: connection refused" logger="UnhandledError"
	E1129 08:29:50.477700       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.45.99:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.45.99:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.45.99:443: connect: connection refused" logger="UnhandledError"
	E1129 08:29:50.498880       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.45.99:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.45.99:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.45.99:443: connect: connection refused" logger="UnhandledError"
	W1129 08:29:51.473944       1 handler_proxy.go:99] no RequestInfo found in the context
	W1129 08:29:51.473963       1 handler_proxy.go:99] no RequestInfo found in the context
	E1129 08:29:51.473989       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1129 08:29:51.474004       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1129 08:29:51.474041       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1129 08:29:51.475155       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1129 08:29:55.544866       1 handler_proxy.go:99] no RequestInfo found in the context
	E1129 08:29:55.544941       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1129 08:29:55.544960       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.45.99:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.45.99:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	I1129 08:29:55.555322       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1129 08:30:35.322021       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:55098: use of closed network connection
	E1129 08:30:35.466726       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:55118: use of closed network connection
	I1129 08:30:35.974320       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1129 08:30:36.184531       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.246.158"}
	
	
	==> kube-controller-manager [fa034ef6fed4ecde89cfb42c0d21d8c3f6bf1bb0f3863938a47c874ee0d12dc6] <==
	I1129 08:29:04.175415       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1129 08:29:04.175442       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1129 08:29:04.175464       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1129 08:29:04.175528       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1129 08:29:04.175538       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1129 08:29:04.175559       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1129 08:29:04.176467       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1129 08:29:04.177943       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1129 08:29:04.178010       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1129 08:29:04.178055       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1129 08:29:04.178062       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1129 08:29:04.178067       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1129 08:29:04.179110       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 08:29:04.180288       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1129 08:29:04.183987       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-053273" podCIDRs=["10.244.0.0/24"]
	I1129 08:29:04.196514       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1129 08:29:06.790328       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1129 08:29:34.183531       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1129 08:29:34.183647       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1129 08:29:34.183695       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1129 08:29:34.205679       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1129 08:29:34.209271       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1129 08:29:34.284542       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 08:29:34.309827       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 08:29:49.129719       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [536ae01d9c834fda23c10b1cd958984657b851b56d40240995ae742e32fa0686] <==
	I1129 08:29:06.675750       1 server_linux.go:53] "Using iptables proxy"
	I1129 08:29:06.763457       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 08:29:06.864738       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 08:29:06.864772       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1129 08:29:06.864872       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 08:29:06.895686       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 08:29:06.895743       1 server_linux.go:132] "Using iptables Proxier"
	I1129 08:29:06.902470       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 08:29:06.907786       1 server.go:527] "Version info" version="v1.34.1"
	I1129 08:29:06.907827       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 08:29:06.909339       1 config.go:106] "Starting endpoint slice config controller"
	I1129 08:29:06.909369       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 08:29:06.909404       1 config.go:200] "Starting service config controller"
	I1129 08:29:06.909411       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 08:29:06.909428       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 08:29:06.909434       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 08:29:06.909483       1 config.go:309] "Starting node config controller"
	I1129 08:29:06.909498       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 08:29:07.009767       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 08:29:07.009778       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 08:29:07.009798       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 08:29:07.009811       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [42985a54cfd5e3183cf34b9ec4e83a5bb0dacfbaf9b293c28fdbc26c9e683b7b] <==
	E1129 08:28:57.196714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1129 08:28:57.197714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 08:28:57.197826       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1129 08:28:57.197859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1129 08:28:57.197935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1129 08:28:57.197939       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 08:28:57.197981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 08:28:57.198017       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1129 08:28:57.198072       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 08:28:57.198088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1129 08:28:57.198254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1129 08:28:57.198290       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1129 08:28:57.198522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1129 08:28:58.011712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 08:28:58.021640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1129 08:28:58.068996       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 08:28:58.070860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1129 08:28:58.080864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1129 08:28:58.091802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 08:28:58.113231       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1129 08:28:58.258776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1129 08:28:58.295912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1129 08:28:58.418323       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1129 08:28:58.418489       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1129 08:29:01.194108       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 08:30:07 addons-053273 kubelet[1261]: I1129 08:30:07.651381    1261 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsb7q\" (UniqueName: \"kubernetes.io/projected/5c894b8b-f4f2-40ac-8351-4f8b3bb9b06e-kube-api-access-vsb7q\") pod \"5c894b8b-f4f2-40ac-8351-4f8b3bb9b06e\" (UID: \"5c894b8b-f4f2-40ac-8351-4f8b3bb9b06e\") "
	Nov 29 08:30:07 addons-053273 kubelet[1261]: I1129 08:30:07.653471    1261 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c894b8b-f4f2-40ac-8351-4f8b3bb9b06e-kube-api-access-vsb7q" (OuterVolumeSpecName: "kube-api-access-vsb7q") pod "5c894b8b-f4f2-40ac-8351-4f8b3bb9b06e" (UID: "5c894b8b-f4f2-40ac-8351-4f8b3bb9b06e"). InnerVolumeSpecName "kube-api-access-vsb7q". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 29 08:30:07 addons-053273 kubelet[1261]: I1129 08:30:07.751983    1261 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vsb7q\" (UniqueName: \"kubernetes.io/projected/5c894b8b-f4f2-40ac-8351-4f8b3bb9b06e-kube-api-access-vsb7q\") on node \"addons-053273\" DevicePath \"\""
	Nov 29 08:30:08 addons-053273 kubelet[1261]: I1129 08:30:08.534758    1261 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29e2c0c0868061ec7f564cda6c550593d826562f72d0bbb41de5eff5735d84de"
	Nov 29 08:30:10 addons-053273 kubelet[1261]: I1129 08:30:10.541968    1261 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-52bjw" secret="" err="secret \"gcp-auth\" not found"
	Nov 29 08:30:10 addons-053273 kubelet[1261]: I1129 08:30:10.554349    1261 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/nvidia-device-plugin-daemonset-52bjw" podStartSLOduration=1.625611841 podStartE2EDuration="23.554330323s" podCreationTimestamp="2025-11-29 08:29:47 +0000 UTC" firstStartedPulling="2025-11-29 08:29:47.621353739 +0000 UTC m=+48.389142647" lastFinishedPulling="2025-11-29 08:30:09.550072231 +0000 UTC m=+70.317861129" observedRunningTime="2025-11-29 08:30:10.553422464 +0000 UTC m=+71.321211379" watchObservedRunningTime="2025-11-29 08:30:10.554330323 +0000 UTC m=+71.322119240"
	Nov 29 08:30:11 addons-053273 kubelet[1261]: I1129 08:30:11.549473    1261 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-zsxkb" secret="" err="secret \"gcp-auth\" not found"
	Nov 29 08:30:11 addons-053273 kubelet[1261]: I1129 08:30:11.549611    1261 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-52bjw" secret="" err="secret \"gcp-auth\" not found"
	Nov 29 08:30:11 addons-053273 kubelet[1261]: I1129 08:30:11.559652    1261 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-zsxkb" podStartSLOduration=0.896448337 podStartE2EDuration="24.559635009s" podCreationTimestamp="2025-11-29 08:29:47 +0000 UTC" firstStartedPulling="2025-11-29 08:29:47.699811544 +0000 UTC m=+48.467600452" lastFinishedPulling="2025-11-29 08:30:11.362998206 +0000 UTC m=+72.130787124" observedRunningTime="2025-11-29 08:30:11.559066799 +0000 UTC m=+72.326855715" watchObservedRunningTime="2025-11-29 08:30:11.559635009 +0000 UTC m=+72.327423925"
	Nov 29 08:30:12 addons-053273 kubelet[1261]: I1129 08:30:12.553218    1261 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-zsxkb" secret="" err="secret \"gcp-auth\" not found"
	Nov 29 08:30:18 addons-053273 kubelet[1261]: I1129 08:30:18.598140    1261 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-6c8bf45fb-49927" podStartSLOduration=60.607931092 podStartE2EDuration="1m12.598120937s" podCreationTimestamp="2025-11-29 08:29:06 +0000 UTC" firstStartedPulling="2025-11-29 08:30:03.154544683 +0000 UTC m=+63.922333582" lastFinishedPulling="2025-11-29 08:30:15.14473453 +0000 UTC m=+75.912523427" observedRunningTime="2025-11-29 08:30:15.577918986 +0000 UTC m=+76.345707902" watchObservedRunningTime="2025-11-29 08:30:18.598120937 +0000 UTC m=+79.365909852"
	Nov 29 08:30:18 addons-053273 kubelet[1261]: I1129 08:30:18.598326    1261 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-bcrxg" podStartSLOduration=65.903221168 podStartE2EDuration="1m12.598318405s" podCreationTimestamp="2025-11-29 08:29:06 +0000 UTC" firstStartedPulling="2025-11-29 08:30:11.33847481 +0000 UTC m=+72.106263705" lastFinishedPulling="2025-11-29 08:30:18.033572041 +0000 UTC m=+78.801360942" observedRunningTime="2025-11-29 08:30:18.597753444 +0000 UTC m=+79.365542359" watchObservedRunningTime="2025-11-29 08:30:18.598318405 +0000 UTC m=+79.366107321"
	Nov 29 08:30:19 addons-053273 kubelet[1261]: E1129 08:30:19.044827    1261 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Nov 29 08:30:19 addons-053273 kubelet[1261]: E1129 08:30:19.044969    1261 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e715e721-8143-4728-b353-67ec7cddd186-gcr-creds podName:e715e721-8143-4728-b353-67ec7cddd186 nodeName:}" failed. No retries permitted until 2025-11-29 08:30:51.044945044 +0000 UTC m=+111.812733941 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/e715e721-8143-4728-b353-67ec7cddd186-gcr-creds") pod "registry-creds-764b6fb674-ktw8b" (UID: "e715e721-8143-4728-b353-67ec7cddd186") : secret "registry-creds-gcr" not found
	Nov 29 08:30:20 addons-053273 kubelet[1261]: I1129 08:30:20.360139    1261 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Nov 29 08:30:20 addons-053273 kubelet[1261]: I1129 08:30:20.360186    1261 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Nov 29 08:30:21 addons-053273 kubelet[1261]: I1129 08:30:21.704887    1261 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-msfdv" podStartSLOduration=66.975106833 podStartE2EDuration="1m8.7048644s" podCreationTimestamp="2025-11-29 08:29:13 +0000 UTC" firstStartedPulling="2025-11-29 08:30:19.361777313 +0000 UTC m=+80.129566208" lastFinishedPulling="2025-11-29 08:30:21.091534876 +0000 UTC m=+81.859323775" observedRunningTime="2025-11-29 08:30:21.61124168 +0000 UTC m=+82.379030596" watchObservedRunningTime="2025-11-29 08:30:21.7048644 +0000 UTC m=+82.472653297"
	Nov 29 08:30:23 addons-053273 kubelet[1261]: I1129 08:30:23.622298    1261 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-rvvrd" podStartSLOduration=1.27798287 podStartE2EDuration="36.62227642s" podCreationTimestamp="2025-11-29 08:29:47 +0000 UTC" firstStartedPulling="2025-11-29 08:29:47.618621364 +0000 UTC m=+48.386410272" lastFinishedPulling="2025-11-29 08:30:22.962914913 +0000 UTC m=+83.730703822" observedRunningTime="2025-11-29 08:30:23.621792078 +0000 UTC m=+84.389580994" watchObservedRunningTime="2025-11-29 08:30:23.62227642 +0000 UTC m=+84.390065337"
	Nov 29 08:30:26 addons-053273 kubelet[1261]: I1129 08:30:26.300831    1261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br7th\" (UniqueName: \"kubernetes.io/projected/985cf3af-eaa2-4b5b-a465-7777bcef18d9-kube-api-access-br7th\") pod \"busybox\" (UID: \"985cf3af-eaa2-4b5b-a465-7777bcef18d9\") " pod="default/busybox"
	Nov 29 08:30:26 addons-053273 kubelet[1261]: I1129 08:30:26.300917    1261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/985cf3af-eaa2-4b5b-a465-7777bcef18d9-gcp-creds\") pod \"busybox\" (UID: \"985cf3af-eaa2-4b5b-a465-7777bcef18d9\") " pod="default/busybox"
	Nov 29 08:30:28 addons-053273 kubelet[1261]: I1129 08:30:28.644988    1261 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.3984985349999999 podStartE2EDuration="2.64497088s" podCreationTimestamp="2025-11-29 08:30:26 +0000 UTC" firstStartedPulling="2025-11-29 08:30:26.508419073 +0000 UTC m=+87.276207987" lastFinishedPulling="2025-11-29 08:30:27.754891437 +0000 UTC m=+88.522680332" observedRunningTime="2025-11-29 08:30:28.644182452 +0000 UTC m=+89.411971370" watchObservedRunningTime="2025-11-29 08:30:28.64497088 +0000 UTC m=+89.412759796"
	Nov 29 08:30:35 addons-053273 kubelet[1261]: E1129 08:30:35.466675    1261 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:39314->127.0.0.1:38331: write tcp 127.0.0.1:39314->127.0.0.1:38331: write: broken pipe
	Nov 29 08:30:36 addons-053273 kubelet[1261]: I1129 08:30:36.180014    1261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k678\" (UniqueName: \"kubernetes.io/projected/659f30f3-f651-4f47-8941-c7e89b0ae22d-kube-api-access-8k678\") pod \"nginx\" (UID: \"659f30f3-f651-4f47-8941-c7e89b0ae22d\") " pod="default/nginx"
	Nov 29 08:30:36 addons-053273 kubelet[1261]: I1129 08:30:36.180072    1261 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/659f30f3-f651-4f47-8941-c7e89b0ae22d-gcp-creds\") pod \"nginx\" (UID: \"659f30f3-f651-4f47-8941-c7e89b0ae22d\") " pod="default/nginx"
	Nov 29 08:30:37 addons-053273 kubelet[1261]: I1129 08:30:37.315132    1261 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4618ae45-99c1-4a7f-bb08-b76aa5b17586" path="/var/lib/kubelet/pods/4618ae45-99c1-4a7f-bb08-b76aa5b17586/volumes"
	
	
	==> storage-provisioner [f2891b2f589bc2452e45d6acdd71b5d05e0031591db3c871a122da94ae85e989] <==
	W1129 08:30:11.757217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:30:13.760932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:30:13.764756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:30:15.767658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:30:15.772734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:30:17.775972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:30:17.779939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:30:19.782510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:30:19.787100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:30:21.790058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:30:21.796702       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:30:23.799708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:30:23.803487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:30:25.806154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:30:25.809611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:30:27.812374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:30:27.816934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:30:29.819951       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:30:29.823552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:30:31.826659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:30:31.830507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:30:33.833145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:30:33.836746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:30:35.839792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:30:35.843391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-053273 -n addons-053273
helpers_test.go:269: (dbg) Run:  kubectl --context addons-053273 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-mnxkr ingress-nginx-admission-patch-hhlkx registry-creds-764b6fb674-ktw8b
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-053273 describe pod ingress-nginx-admission-create-mnxkr ingress-nginx-admission-patch-hhlkx registry-creds-764b6fb674-ktw8b
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-053273 describe pod ingress-nginx-admission-create-mnxkr ingress-nginx-admission-patch-hhlkx registry-creds-764b6fb674-ktw8b: exit status 1 (56.108464ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-mnxkr" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-hhlkx" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-ktw8b" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-053273 describe pod ingress-nginx-admission-create-mnxkr ingress-nginx-admission-patch-hhlkx registry-creds-764b6fb674-ktw8b: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-053273 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-053273 addons disable headlamp --alsologtostderr -v=1: exit status 11 (243.442062ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 08:30:38.273784   20017 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:30:38.273956   20017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:30:38.273968   20017 out.go:374] Setting ErrFile to fd 2...
	I1129 08:30:38.273975   20017 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:30:38.274171   20017 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 08:30:38.274462   20017 mustload.go:66] Loading cluster: addons-053273
	I1129 08:30:38.275567   20017 config.go:182] Loaded profile config "addons-053273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:30:38.275601   20017 addons.go:622] checking whether the cluster is paused
	I1129 08:30:38.275803   20017 config.go:182] Loaded profile config "addons-053273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:30:38.276008   20017 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:30:38.276512   20017 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:30:38.295085   20017 ssh_runner.go:195] Run: systemctl --version
	I1129 08:30:38.295148   20017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:30:38.311532   20017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:30:38.410455   20017 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 08:30:38.410544   20017 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 08:30:38.438419   20017 cri.go:89] found id: "36eaff53da4c2f6b42add6d16769dbf72367cea99a40071022ca508b78f204e3"
	I1129 08:30:38.438437   20017 cri.go:89] found id: "ad9d7cb785c36f52b00257c3dc218a265f0c419810e304289cb4bd6ae54ea0fb"
	I1129 08:30:38.438441   20017 cri.go:89] found id: "d81ef79dbc162f4439586944a6f6e7ec9ee49b46c9f99570a494e65d9e844e1c"
	I1129 08:30:38.438444   20017 cri.go:89] found id: "847e89dd7511ca217724747e5a7d3d3e92a9a1203557c6d825fac2ac93fc42de"
	I1129 08:30:38.438448   20017 cri.go:89] found id: "f6278bb7605d8f529b23928a2ff314c13962d33818b19c2e815efc915bdd97b4"
	I1129 08:30:38.438451   20017 cri.go:89] found id: "0d8ff87c1bbdabd06fd59e49a398d261ba1eac9b8e8918bdd207dee2b82f25fe"
	I1129 08:30:38.438454   20017 cri.go:89] found id: "4f7efb63753a43998a433bef512bb3d0227ff650217a030621d7f3865343ac15"
	I1129 08:30:38.438456   20017 cri.go:89] found id: "5858e5f4e311e0929b2f180d850e946a59fcb892b40f826155bc49556663a6c3"
	I1129 08:30:38.438459   20017 cri.go:89] found id: "73fddce698a89389e28548b830f2e792a469943f50f2062c0e8ab0f768c2fdf3"
	I1129 08:30:38.438464   20017 cri.go:89] found id: "e00a3c9e09fd212280a726f4b34a8ae118921841c54dd3f203d4bfbdfe524136"
	I1129 08:30:38.438467   20017 cri.go:89] found id: "25051b979640fb9f0ca6b2fefbbf729dbf13d7807bc2a5bbe509c1427a1450e4"
	I1129 08:30:38.438470   20017 cri.go:89] found id: "89340d645aa11e58c4854339529354f343983cd634d11dc662ffd693c5c439ce"
	I1129 08:30:38.438474   20017 cri.go:89] found id: "5974b84e4fa7722f053401f2d8bab3dd46415a2c49bbe27ca534168936995dda"
	I1129 08:30:38.438476   20017 cri.go:89] found id: "cccc47c19980c886590d29fb88ee7a2c36254653e3f67fc92d57130b657fb514"
	I1129 08:30:38.438479   20017 cri.go:89] found id: "9dd9f49e78582101e727a4a329080a23511be2b196b7bd5a49d11bb18bb9456e"
	I1129 08:30:38.438484   20017 cri.go:89] found id: "ab73d32295897c398656e0ef9e0bde92cfc96cc5b9ea3e9e791a5408cd0a58fe"
	I1129 08:30:38.438487   20017 cri.go:89] found id: "5415e4a1867a4762dcd4722642096717b802f122cd9dcf2f049fe3c310535a6b"
	I1129 08:30:38.438492   20017 cri.go:89] found id: "f2891b2f589bc2452e45d6acdd71b5d05e0031591db3c871a122da94ae85e989"
	I1129 08:30:38.438495   20017 cri.go:89] found id: "536ae01d9c834fda23c10b1cd958984657b851b56d40240995ae742e32fa0686"
	I1129 08:30:38.438497   20017 cri.go:89] found id: "e7636733a471a3a3f946995b88ecad291811005e4acb271d6a45893ae7c423d8"
	I1129 08:30:38.438503   20017 cri.go:89] found id: "e64fa5518306f9c5652b32ea5acb6e82c7ee8c3579473ec40514f095b17e4677"
	I1129 08:30:38.438505   20017 cri.go:89] found id: "42985a54cfd5e3183cf34b9ec4e83a5bb0dacfbaf9b293c28fdbc26c9e683b7b"
	I1129 08:30:38.438509   20017 cri.go:89] found id: "97e8e47987d6cc872f7c55ace3420ac14fe7648d2b6b68805f7ec77909d9de31"
	I1129 08:30:38.438512   20017 cri.go:89] found id: "fa034ef6fed4ecde89cfb42c0d21d8c3f6bf1bb0f3863938a47c874ee0d12dc6"
	I1129 08:30:38.438515   20017 cri.go:89] found id: ""
	I1129 08:30:38.438552   20017 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 08:30:38.452051   20017 out.go:203] 
	W1129 08:30:38.453065   20017 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T08:30:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T08:30:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 08:30:38.453085   20017 out.go:285] * 
	* 
	W1129 08:30:38.456093   20017 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 08:30:38.457486   20017 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-053273 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.74s)
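
[Editor's note] Every MK_ADDON_DISABLE_PAUSED failure in this report follows the same path visible in the stderr trace above: `addons disable` first checks whether the cluster is paused (addons.go:622), lists kube-system containers via crictl (cri.go:54), then runs `sudo runc list -f json`, which exits non-zero on this crio node because /run/runc does not exist. The following is a minimal Go sketch for reproducing just the failing step on the node; the helper name and the direct use of os/exec are illustrative assumptions, not minikube's actual implementation, which routes the command through its own ssh_runner.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// listRuncContainers reproduces the failing check from the trace above:
	// `sudo runc list -f json`. On this node the command exits with status 1
	// and prints `open /run/runc: no such file or directory`, which minikube
	// then surfaces as MK_ADDON_DISABLE_PAUSED.
	// NOTE: illustrative sketch only, not minikube's code path.
	func listRuncContainers() (string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			return "", fmt.Errorf("runc list -f json: %w\n%s", err, out)
		}
		return string(out), nil
	}

	func main() {
		if _, err := listRuncContainers(); err != nil {
			// Expected on this node: "open /run/runc: no such file or directory".
			fmt.Println(err)
		}
	}

Run inside the addons-053273 container (e.g. via `minikube ssh`), this reproduces the exact error string seen in each disable attempt, independent of the addon being disabled.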

                                                
                                    
TestAddons/parallel/CloudSpanner (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-4krxw" [19ad1d6a-978d-4686-b4af-5f598dace8ac] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003607561s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-053273 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-053273 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (248.570494ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 08:30:59.471511   21963 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:30:59.471860   21963 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:30:59.471871   21963 out.go:374] Setting ErrFile to fd 2...
	I1129 08:30:59.471875   21963 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:30:59.472061   21963 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 08:30:59.472349   21963 mustload.go:66] Loading cluster: addons-053273
	I1129 08:30:59.472650   21963 config.go:182] Loaded profile config "addons-053273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:30:59.472668   21963 addons.go:622] checking whether the cluster is paused
	I1129 08:30:59.472748   21963 config.go:182] Loaded profile config "addons-053273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:30:59.472762   21963 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:30:59.473121   21963 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:30:59.491129   21963 ssh_runner.go:195] Run: systemctl --version
	I1129 08:30:59.491180   21963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:30:59.509005   21963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:30:59.609920   21963 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 08:30:59.609998   21963 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 08:30:59.639504   21963 cri.go:89] found id: "8026c36cf38b7fdb674df2a6d65c677c169135d15515b8264ddf60493330acdd"
	I1129 08:30:59.639542   21963 cri.go:89] found id: "36eaff53da4c2f6b42add6d16769dbf72367cea99a40071022ca508b78f204e3"
	I1129 08:30:59.639547   21963 cri.go:89] found id: "ad9d7cb785c36f52b00257c3dc218a265f0c419810e304289cb4bd6ae54ea0fb"
	I1129 08:30:59.639553   21963 cri.go:89] found id: "d81ef79dbc162f4439586944a6f6e7ec9ee49b46c9f99570a494e65d9e844e1c"
	I1129 08:30:59.639558   21963 cri.go:89] found id: "847e89dd7511ca217724747e5a7d3d3e92a9a1203557c6d825fac2ac93fc42de"
	I1129 08:30:59.639565   21963 cri.go:89] found id: "f6278bb7605d8f529b23928a2ff314c13962d33818b19c2e815efc915bdd97b4"
	I1129 08:30:59.639569   21963 cri.go:89] found id: "0d8ff87c1bbdabd06fd59e49a398d261ba1eac9b8e8918bdd207dee2b82f25fe"
	I1129 08:30:59.639573   21963 cri.go:89] found id: "4f7efb63753a43998a433bef512bb3d0227ff650217a030621d7f3865343ac15"
	I1129 08:30:59.639578   21963 cri.go:89] found id: "5858e5f4e311e0929b2f180d850e946a59fcb892b40f826155bc49556663a6c3"
	I1129 08:30:59.639592   21963 cri.go:89] found id: "73fddce698a89389e28548b830f2e792a469943f50f2062c0e8ab0f768c2fdf3"
	I1129 08:30:59.639600   21963 cri.go:89] found id: "e00a3c9e09fd212280a726f4b34a8ae118921841c54dd3f203d4bfbdfe524136"
	I1129 08:30:59.639603   21963 cri.go:89] found id: "25051b979640fb9f0ca6b2fefbbf729dbf13d7807bc2a5bbe509c1427a1450e4"
	I1129 08:30:59.639606   21963 cri.go:89] found id: "89340d645aa11e58c4854339529354f343983cd634d11dc662ffd693c5c439ce"
	I1129 08:30:59.639608   21963 cri.go:89] found id: "5974b84e4fa7722f053401f2d8bab3dd46415a2c49bbe27ca534168936995dda"
	I1129 08:30:59.639611   21963 cri.go:89] found id: "cccc47c19980c886590d29fb88ee7a2c36254653e3f67fc92d57130b657fb514"
	I1129 08:30:59.639626   21963 cri.go:89] found id: "9dd9f49e78582101e727a4a329080a23511be2b196b7bd5a49d11bb18bb9456e"
	I1129 08:30:59.639633   21963 cri.go:89] found id: "ab73d32295897c398656e0ef9e0bde92cfc96cc5b9ea3e9e791a5408cd0a58fe"
	I1129 08:30:59.639638   21963 cri.go:89] found id: "5415e4a1867a4762dcd4722642096717b802f122cd9dcf2f049fe3c310535a6b"
	I1129 08:30:59.639641   21963 cri.go:89] found id: "f2891b2f589bc2452e45d6acdd71b5d05e0031591db3c871a122da94ae85e989"
	I1129 08:30:59.639644   21963 cri.go:89] found id: "536ae01d9c834fda23c10b1cd958984657b851b56d40240995ae742e32fa0686"
	I1129 08:30:59.639646   21963 cri.go:89] found id: "e7636733a471a3a3f946995b88ecad291811005e4acb271d6a45893ae7c423d8"
	I1129 08:30:59.639649   21963 cri.go:89] found id: "e64fa5518306f9c5652b32ea5acb6e82c7ee8c3579473ec40514f095b17e4677"
	I1129 08:30:59.639652   21963 cri.go:89] found id: "42985a54cfd5e3183cf34b9ec4e83a5bb0dacfbaf9b293c28fdbc26c9e683b7b"
	I1129 08:30:59.639655   21963 cri.go:89] found id: "97e8e47987d6cc872f7c55ace3420ac14fe7648d2b6b68805f7ec77909d9de31"
	I1129 08:30:59.639658   21963 cri.go:89] found id: "fa034ef6fed4ecde89cfb42c0d21d8c3f6bf1bb0f3863938a47c874ee0d12dc6"
	I1129 08:30:59.639661   21963 cri.go:89] found id: ""
	I1129 08:30:59.639723   21963 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 08:30:59.653735   21963 out.go:203] 
	W1129 08:30:59.654999   21963 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T08:30:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T08:30:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 08:30:59.655024   21963 out.go:285] * 
	* 
	W1129 08:30:59.658127   21963 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 08:30:59.659478   21963 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-053273 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.26s)
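
Every addons-disable failure in this run exits at the same point: before disabling an addon, minikube checks whether the cluster is paused by shelling out to `sudo runc list -f json`, and on this crio node that command fails with "open /run/runc: no such file or directory". A plausible reading is that crio drives its OCI runtime with its own state root (runtime_root in crio.conf), so runc's default root /run/runc never gets created. The sketch below reproduces the check by hand under that assumption; it is a diagnostic aid, not a verified fix, and the paths are assumptions about this node's configuration.

	# reproduce the failing pause check inside the node (sketch)
	$ minikube -p addons-053273 ssh -- sudo runc list -f json
	#   time="..." level=error msg="open /run/runc: no such file or directory"

	# check which state root crio actually configures for its OCI runtime
	$ minikube -p addons-053273 ssh -- sudo grep -r runtime_root /etc/crio/

	# pre-creating runc's default root (or passing crio's root via --root)
	# should let the listing succeed with an empty result instead of erroring
	$ minikube -p addons-053273 ssh -- sudo mkdir -p /run/runc
	$ minikube -p addons-053273 ssh -- sudo runc list -f json

The same MK_ADDON_DISABLE_PAUSED exit repeats verbatim in the LocalPath, NvidiaDevicePlugin, Yakd, and AmdGpuDevicePlugin failures below.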

TestAddons/parallel/LocalPath (8.12s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-053273 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-053273 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053273 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053273 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053273 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053273 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-053273 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [9e34c6ce-9f0b-442d-b2a2-25f31c78deff] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [9e34c6ce-9f0b-442d-b2a2-25f31c78deff] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [9e34c6ce-9f0b-442d-b2a2-25f31c78deff] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003162008s
addons_test.go:967: (dbg) Run:  kubectl --context addons-053273 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-053273 ssh "cat /opt/local-path-provisioner/pvc-e4d98104-0771-4855-8667-9a2fb8670c8c_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-053273 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-053273 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-053273 addons disable storage-provisioner-rancher --alsologtostderr -v=1
2025/11/29 08:30:48 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-053273 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (262.377738ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1129 08:30:48.945215   21211 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:30:48.945552   21211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:30:48.945569   21211 out.go:374] Setting ErrFile to fd 2...
	I1129 08:30:48.945576   21211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:30:48.945877   21211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 08:30:48.946246   21211 mustload.go:66] Loading cluster: addons-053273
	I1129 08:30:48.946545   21211 config.go:182] Loaded profile config "addons-053273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:30:48.946563   21211 addons.go:622] checking whether the cluster is paused
	I1129 08:30:48.946639   21211 config.go:182] Loaded profile config "addons-053273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:30:48.946653   21211 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:30:48.947040   21211 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:30:48.965258   21211 ssh_runner.go:195] Run: systemctl --version
	I1129 08:30:48.965311   21211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:30:48.984367   21211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:30:49.088587   21211 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 08:30:49.088651   21211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 08:30:49.117876   21211 cri.go:89] found id: "36eaff53da4c2f6b42add6d16769dbf72367cea99a40071022ca508b78f204e3"
	I1129 08:30:49.117912   21211 cri.go:89] found id: "ad9d7cb785c36f52b00257c3dc218a265f0c419810e304289cb4bd6ae54ea0fb"
	I1129 08:30:49.117918   21211 cri.go:89] found id: "d81ef79dbc162f4439586944a6f6e7ec9ee49b46c9f99570a494e65d9e844e1c"
	I1129 08:30:49.117922   21211 cri.go:89] found id: "847e89dd7511ca217724747e5a7d3d3e92a9a1203557c6d825fac2ac93fc42de"
	I1129 08:30:49.117924   21211 cri.go:89] found id: "f6278bb7605d8f529b23928a2ff314c13962d33818b19c2e815efc915bdd97b4"
	I1129 08:30:49.117928   21211 cri.go:89] found id: "0d8ff87c1bbdabd06fd59e49a398d261ba1eac9b8e8918bdd207dee2b82f25fe"
	I1129 08:30:49.117931   21211 cri.go:89] found id: "4f7efb63753a43998a433bef512bb3d0227ff650217a030621d7f3865343ac15"
	I1129 08:30:49.117933   21211 cri.go:89] found id: "5858e5f4e311e0929b2f180d850e946a59fcb892b40f826155bc49556663a6c3"
	I1129 08:30:49.117936   21211 cri.go:89] found id: "73fddce698a89389e28548b830f2e792a469943f50f2062c0e8ab0f768c2fdf3"
	I1129 08:30:49.117941   21211 cri.go:89] found id: "e00a3c9e09fd212280a726f4b34a8ae118921841c54dd3f203d4bfbdfe524136"
	I1129 08:30:49.117945   21211 cri.go:89] found id: "25051b979640fb9f0ca6b2fefbbf729dbf13d7807bc2a5bbe509c1427a1450e4"
	I1129 08:30:49.117948   21211 cri.go:89] found id: "89340d645aa11e58c4854339529354f343983cd634d11dc662ffd693c5c439ce"
	I1129 08:30:49.117951   21211 cri.go:89] found id: "5974b84e4fa7722f053401f2d8bab3dd46415a2c49bbe27ca534168936995dda"
	I1129 08:30:49.117955   21211 cri.go:89] found id: "cccc47c19980c886590d29fb88ee7a2c36254653e3f67fc92d57130b657fb514"
	I1129 08:30:49.117958   21211 cri.go:89] found id: "9dd9f49e78582101e727a4a329080a23511be2b196b7bd5a49d11bb18bb9456e"
	I1129 08:30:49.117967   21211 cri.go:89] found id: "ab73d32295897c398656e0ef9e0bde92cfc96cc5b9ea3e9e791a5408cd0a58fe"
	I1129 08:30:49.117972   21211 cri.go:89] found id: "5415e4a1867a4762dcd4722642096717b802f122cd9dcf2f049fe3c310535a6b"
	I1129 08:30:49.117977   21211 cri.go:89] found id: "f2891b2f589bc2452e45d6acdd71b5d05e0031591db3c871a122da94ae85e989"
	I1129 08:30:49.117980   21211 cri.go:89] found id: "536ae01d9c834fda23c10b1cd958984657b851b56d40240995ae742e32fa0686"
	I1129 08:30:49.117982   21211 cri.go:89] found id: "e7636733a471a3a3f946995b88ecad291811005e4acb271d6a45893ae7c423d8"
	I1129 08:30:49.117985   21211 cri.go:89] found id: "e64fa5518306f9c5652b32ea5acb6e82c7ee8c3579473ec40514f095b17e4677"
	I1129 08:30:49.117988   21211 cri.go:89] found id: "42985a54cfd5e3183cf34b9ec4e83a5bb0dacfbaf9b293c28fdbc26c9e683b7b"
	I1129 08:30:49.117994   21211 cri.go:89] found id: "97e8e47987d6cc872f7c55ace3420ac14fe7648d2b6b68805f7ec77909d9de31"
	I1129 08:30:49.117997   21211 cri.go:89] found id: "fa034ef6fed4ecde89cfb42c0d21d8c3f6bf1bb0f3863938a47c874ee0d12dc6"
	I1129 08:30:49.118000   21211 cri.go:89] found id: ""
	I1129 08:30:49.118036   21211 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 08:30:49.131634   21211 out.go:203] 
	W1129 08:30:49.133243   21211 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T08:30:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T08:30:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 08:30:49.133262   21211 out.go:285] * 
	* 
	W1129 08:30:49.136208   21211 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 08:30:49.138742   21211 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-053273 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.12s)

TestAddons/parallel/NvidiaDevicePlugin (5.26s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-52bjw" [111c52a4-32cd-4beb-a0ed-11bcc2e5bf21] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004022505s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-053273 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-053273 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (251.509527ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1129 08:30:54.284364   21743 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:30:54.284487   21743 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:30:54.284495   21743 out.go:374] Setting ErrFile to fd 2...
	I1129 08:30:54.284499   21743 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:30:54.284703   21743 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 08:30:54.284959   21743 mustload.go:66] Loading cluster: addons-053273
	I1129 08:30:54.285272   21743 config.go:182] Loaded profile config "addons-053273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:30:54.285289   21743 addons.go:622] checking whether the cluster is paused
	I1129 08:30:54.285367   21743 config.go:182] Loaded profile config "addons-053273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:30:54.285381   21743 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:30:54.285734   21743 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:30:54.305744   21743 ssh_runner.go:195] Run: systemctl --version
	I1129 08:30:54.305801   21743 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:30:54.324129   21743 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:30:54.425389   21743 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 08:30:54.425468   21743 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 08:30:54.454207   21743 cri.go:89] found id: "8026c36cf38b7fdb674df2a6d65c677c169135d15515b8264ddf60493330acdd"
	I1129 08:30:54.454244   21743 cri.go:89] found id: "36eaff53da4c2f6b42add6d16769dbf72367cea99a40071022ca508b78f204e3"
	I1129 08:30:54.454249   21743 cri.go:89] found id: "ad9d7cb785c36f52b00257c3dc218a265f0c419810e304289cb4bd6ae54ea0fb"
	I1129 08:30:54.454252   21743 cri.go:89] found id: "d81ef79dbc162f4439586944a6f6e7ec9ee49b46c9f99570a494e65d9e844e1c"
	I1129 08:30:54.454254   21743 cri.go:89] found id: "847e89dd7511ca217724747e5a7d3d3e92a9a1203557c6d825fac2ac93fc42de"
	I1129 08:30:54.454258   21743 cri.go:89] found id: "f6278bb7605d8f529b23928a2ff314c13962d33818b19c2e815efc915bdd97b4"
	I1129 08:30:54.454261   21743 cri.go:89] found id: "0d8ff87c1bbdabd06fd59e49a398d261ba1eac9b8e8918bdd207dee2b82f25fe"
	I1129 08:30:54.454265   21743 cri.go:89] found id: "4f7efb63753a43998a433bef512bb3d0227ff650217a030621d7f3865343ac15"
	I1129 08:30:54.454267   21743 cri.go:89] found id: "5858e5f4e311e0929b2f180d850e946a59fcb892b40f826155bc49556663a6c3"
	I1129 08:30:54.454279   21743 cri.go:89] found id: "73fddce698a89389e28548b830f2e792a469943f50f2062c0e8ab0f768c2fdf3"
	I1129 08:30:54.454282   21743 cri.go:89] found id: "e00a3c9e09fd212280a726f4b34a8ae118921841c54dd3f203d4bfbdfe524136"
	I1129 08:30:54.454285   21743 cri.go:89] found id: "25051b979640fb9f0ca6b2fefbbf729dbf13d7807bc2a5bbe509c1427a1450e4"
	I1129 08:30:54.454288   21743 cri.go:89] found id: "89340d645aa11e58c4854339529354f343983cd634d11dc662ffd693c5c439ce"
	I1129 08:30:54.454291   21743 cri.go:89] found id: "5974b84e4fa7722f053401f2d8bab3dd46415a2c49bbe27ca534168936995dda"
	I1129 08:30:54.454293   21743 cri.go:89] found id: "cccc47c19980c886590d29fb88ee7a2c36254653e3f67fc92d57130b657fb514"
	I1129 08:30:54.454305   21743 cri.go:89] found id: "9dd9f49e78582101e727a4a329080a23511be2b196b7bd5a49d11bb18bb9456e"
	I1129 08:30:54.454313   21743 cri.go:89] found id: "ab73d32295897c398656e0ef9e0bde92cfc96cc5b9ea3e9e791a5408cd0a58fe"
	I1129 08:30:54.454317   21743 cri.go:89] found id: "5415e4a1867a4762dcd4722642096717b802f122cd9dcf2f049fe3c310535a6b"
	I1129 08:30:54.454320   21743 cri.go:89] found id: "f2891b2f589bc2452e45d6acdd71b5d05e0031591db3c871a122da94ae85e989"
	I1129 08:30:54.454323   21743 cri.go:89] found id: "536ae01d9c834fda23c10b1cd958984657b851b56d40240995ae742e32fa0686"
	I1129 08:30:54.454329   21743 cri.go:89] found id: "e7636733a471a3a3f946995b88ecad291811005e4acb271d6a45893ae7c423d8"
	I1129 08:30:54.454331   21743 cri.go:89] found id: "e64fa5518306f9c5652b32ea5acb6e82c7ee8c3579473ec40514f095b17e4677"
	I1129 08:30:54.454334   21743 cri.go:89] found id: "42985a54cfd5e3183cf34b9ec4e83a5bb0dacfbaf9b293c28fdbc26c9e683b7b"
	I1129 08:30:54.454337   21743 cri.go:89] found id: "97e8e47987d6cc872f7c55ace3420ac14fe7648d2b6b68805f7ec77909d9de31"
	I1129 08:30:54.454343   21743 cri.go:89] found id: "fa034ef6fed4ecde89cfb42c0d21d8c3f6bf1bb0f3863938a47c874ee0d12dc6"
	I1129 08:30:54.454345   21743 cri.go:89] found id: ""
	I1129 08:30:54.454389   21743 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 08:30:54.468900   21743 out.go:203] 
	W1129 08:30:54.470488   21743 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T08:30:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T08:30:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 08:30:54.470514   21743 out.go:285] * 
	* 
	W1129 08:30:54.473544   21743 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 08:30:54.474813   21743 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-053273 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.26s)

TestAddons/parallel/Yakd (5.26s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-bxgzw" [bb08ab1f-54ed-4a82-a724-527d5aee448c] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005123048s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-053273 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-053273 addons disable yakd --alsologtostderr -v=1: exit status 11 (257.637454ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1129 08:30:54.207594   21719 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:30:54.207789   21719 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:30:54.207799   21719 out.go:374] Setting ErrFile to fd 2...
	I1129 08:30:54.207806   21719 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:30:54.208006   21719 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 08:30:54.208340   21719 mustload.go:66] Loading cluster: addons-053273
	I1129 08:30:54.208704   21719 config.go:182] Loaded profile config "addons-053273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:30:54.208729   21719 addons.go:622] checking whether the cluster is paused
	I1129 08:30:54.208827   21719 config.go:182] Loaded profile config "addons-053273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:30:54.208862   21719 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:30:54.209266   21719 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:30:54.229139   21719 ssh_runner.go:195] Run: systemctl --version
	I1129 08:30:54.229196   21719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:30:54.248218   21719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:30:54.352253   21719 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 08:30:54.352323   21719 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 08:30:54.382059   21719 cri.go:89] found id: "8026c36cf38b7fdb674df2a6d65c677c169135d15515b8264ddf60493330acdd"
	I1129 08:30:54.382079   21719 cri.go:89] found id: "36eaff53da4c2f6b42add6d16769dbf72367cea99a40071022ca508b78f204e3"
	I1129 08:30:54.382083   21719 cri.go:89] found id: "ad9d7cb785c36f52b00257c3dc218a265f0c419810e304289cb4bd6ae54ea0fb"
	I1129 08:30:54.382086   21719 cri.go:89] found id: "d81ef79dbc162f4439586944a6f6e7ec9ee49b46c9f99570a494e65d9e844e1c"
	I1129 08:30:54.382089   21719 cri.go:89] found id: "847e89dd7511ca217724747e5a7d3d3e92a9a1203557c6d825fac2ac93fc42de"
	I1129 08:30:54.382092   21719 cri.go:89] found id: "f6278bb7605d8f529b23928a2ff314c13962d33818b19c2e815efc915bdd97b4"
	I1129 08:30:54.382095   21719 cri.go:89] found id: "0d8ff87c1bbdabd06fd59e49a398d261ba1eac9b8e8918bdd207dee2b82f25fe"
	I1129 08:30:54.382098   21719 cri.go:89] found id: "4f7efb63753a43998a433bef512bb3d0227ff650217a030621d7f3865343ac15"
	I1129 08:30:54.382100   21719 cri.go:89] found id: "5858e5f4e311e0929b2f180d850e946a59fcb892b40f826155bc49556663a6c3"
	I1129 08:30:54.382105   21719 cri.go:89] found id: "73fddce698a89389e28548b830f2e792a469943f50f2062c0e8ab0f768c2fdf3"
	I1129 08:30:54.382108   21719 cri.go:89] found id: "e00a3c9e09fd212280a726f4b34a8ae118921841c54dd3f203d4bfbdfe524136"
	I1129 08:30:54.382110   21719 cri.go:89] found id: "25051b979640fb9f0ca6b2fefbbf729dbf13d7807bc2a5bbe509c1427a1450e4"
	I1129 08:30:54.382113   21719 cri.go:89] found id: "89340d645aa11e58c4854339529354f343983cd634d11dc662ffd693c5c439ce"
	I1129 08:30:54.382116   21719 cri.go:89] found id: "5974b84e4fa7722f053401f2d8bab3dd46415a2c49bbe27ca534168936995dda"
	I1129 08:30:54.382118   21719 cri.go:89] found id: "cccc47c19980c886590d29fb88ee7a2c36254653e3f67fc92d57130b657fb514"
	I1129 08:30:54.382125   21719 cri.go:89] found id: "9dd9f49e78582101e727a4a329080a23511be2b196b7bd5a49d11bb18bb9456e"
	I1129 08:30:54.382128   21719 cri.go:89] found id: "ab73d32295897c398656e0ef9e0bde92cfc96cc5b9ea3e9e791a5408cd0a58fe"
	I1129 08:30:54.382132   21719 cri.go:89] found id: "5415e4a1867a4762dcd4722642096717b802f122cd9dcf2f049fe3c310535a6b"
	I1129 08:30:54.382135   21719 cri.go:89] found id: "f2891b2f589bc2452e45d6acdd71b5d05e0031591db3c871a122da94ae85e989"
	I1129 08:30:54.382138   21719 cri.go:89] found id: "536ae01d9c834fda23c10b1cd958984657b851b56d40240995ae742e32fa0686"
	I1129 08:30:54.382143   21719 cri.go:89] found id: "e7636733a471a3a3f946995b88ecad291811005e4acb271d6a45893ae7c423d8"
	I1129 08:30:54.382155   21719 cri.go:89] found id: "e64fa5518306f9c5652b32ea5acb6e82c7ee8c3579473ec40514f095b17e4677"
	I1129 08:30:54.382161   21719 cri.go:89] found id: "42985a54cfd5e3183cf34b9ec4e83a5bb0dacfbaf9b293c28fdbc26c9e683b7b"
	I1129 08:30:54.382164   21719 cri.go:89] found id: "97e8e47987d6cc872f7c55ace3420ac14fe7648d2b6b68805f7ec77909d9de31"
	I1129 08:30:54.382166   21719 cri.go:89] found id: "fa034ef6fed4ecde89cfb42c0d21d8c3f6bf1bb0f3863938a47c874ee0d12dc6"
	I1129 08:30:54.382169   21719 cri.go:89] found id: ""
	I1129 08:30:54.382206   21719 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 08:30:54.396438   21719 out.go:203] 
	W1129 08:30:54.398626   21719 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T08:30:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T08:30:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 08:30:54.398647   21719 out.go:285] * 
	* 
	W1129 08:30:54.401709   21719 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 08:30:54.403023   21719 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-053273 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.26s)

TestAddons/parallel/AmdGpuDevicePlugin (5.26s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-d5jts" [d49cc084-87de-4151-bd07-7d32d21a3754] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003660358s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-053273 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-053273 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (254.833117ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1129 08:30:49.215421   21399 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:30:49.215993   21399 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:30:49.216009   21399 out.go:374] Setting ErrFile to fd 2...
	I1129 08:30:49.216015   21399 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:30:49.216278   21399 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 08:30:49.216641   21399 mustload.go:66] Loading cluster: addons-053273
	I1129 08:30:49.217128   21399 config.go:182] Loaded profile config "addons-053273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:30:49.217153   21399 addons.go:622] checking whether the cluster is paused
	I1129 08:30:49.217280   21399 config.go:182] Loaded profile config "addons-053273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:30:49.217302   21399 host.go:66] Checking if "addons-053273" exists ...
	I1129 08:30:49.217733   21399 cli_runner.go:164] Run: docker container inspect addons-053273 --format={{.State.Status}}
	I1129 08:30:49.237172   21399 ssh_runner.go:195] Run: systemctl --version
	I1129 08:30:49.237224   21399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-053273
	I1129 08:30:49.256304   21399 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/addons-053273/id_rsa Username:docker}
	I1129 08:30:49.356382   21399 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 08:30:49.356485   21399 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 08:30:49.385095   21399 cri.go:89] found id: "36eaff53da4c2f6b42add6d16769dbf72367cea99a40071022ca508b78f204e3"
	I1129 08:30:49.385130   21399 cri.go:89] found id: "ad9d7cb785c36f52b00257c3dc218a265f0c419810e304289cb4bd6ae54ea0fb"
	I1129 08:30:49.385135   21399 cri.go:89] found id: "d81ef79dbc162f4439586944a6f6e7ec9ee49b46c9f99570a494e65d9e844e1c"
	I1129 08:30:49.385140   21399 cri.go:89] found id: "847e89dd7511ca217724747e5a7d3d3e92a9a1203557c6d825fac2ac93fc42de"
	I1129 08:30:49.385143   21399 cri.go:89] found id: "f6278bb7605d8f529b23928a2ff314c13962d33818b19c2e815efc915bdd97b4"
	I1129 08:30:49.385147   21399 cri.go:89] found id: "0d8ff87c1bbdabd06fd59e49a398d261ba1eac9b8e8918bdd207dee2b82f25fe"
	I1129 08:30:49.385150   21399 cri.go:89] found id: "4f7efb63753a43998a433bef512bb3d0227ff650217a030621d7f3865343ac15"
	I1129 08:30:49.385153   21399 cri.go:89] found id: "5858e5f4e311e0929b2f180d850e946a59fcb892b40f826155bc49556663a6c3"
	I1129 08:30:49.385155   21399 cri.go:89] found id: "73fddce698a89389e28548b830f2e792a469943f50f2062c0e8ab0f768c2fdf3"
	I1129 08:30:49.385176   21399 cri.go:89] found id: "e00a3c9e09fd212280a726f4b34a8ae118921841c54dd3f203d4bfbdfe524136"
	I1129 08:30:49.385181   21399 cri.go:89] found id: "25051b979640fb9f0ca6b2fefbbf729dbf13d7807bc2a5bbe509c1427a1450e4"
	I1129 08:30:49.385186   21399 cri.go:89] found id: "89340d645aa11e58c4854339529354f343983cd634d11dc662ffd693c5c439ce"
	I1129 08:30:49.385190   21399 cri.go:89] found id: "5974b84e4fa7722f053401f2d8bab3dd46415a2c49bbe27ca534168936995dda"
	I1129 08:30:49.385195   21399 cri.go:89] found id: "cccc47c19980c886590d29fb88ee7a2c36254653e3f67fc92d57130b657fb514"
	I1129 08:30:49.385204   21399 cri.go:89] found id: "9dd9f49e78582101e727a4a329080a23511be2b196b7bd5a49d11bb18bb9456e"
	I1129 08:30:49.385216   21399 cri.go:89] found id: "ab73d32295897c398656e0ef9e0bde92cfc96cc5b9ea3e9e791a5408cd0a58fe"
	I1129 08:30:49.385222   21399 cri.go:89] found id: "5415e4a1867a4762dcd4722642096717b802f122cd9dcf2f049fe3c310535a6b"
	I1129 08:30:49.385226   21399 cri.go:89] found id: "f2891b2f589bc2452e45d6acdd71b5d05e0031591db3c871a122da94ae85e989"
	I1129 08:30:49.385229   21399 cri.go:89] found id: "536ae01d9c834fda23c10b1cd958984657b851b56d40240995ae742e32fa0686"
	I1129 08:30:49.385231   21399 cri.go:89] found id: "e7636733a471a3a3f946995b88ecad291811005e4acb271d6a45893ae7c423d8"
	I1129 08:30:49.385234   21399 cri.go:89] found id: "e64fa5518306f9c5652b32ea5acb6e82c7ee8c3579473ec40514f095b17e4677"
	I1129 08:30:49.385237   21399 cri.go:89] found id: "42985a54cfd5e3183cf34b9ec4e83a5bb0dacfbaf9b293c28fdbc26c9e683b7b"
	I1129 08:30:49.385240   21399 cri.go:89] found id: "97e8e47987d6cc872f7c55ace3420ac14fe7648d2b6b68805f7ec77909d9de31"
	I1129 08:30:49.385243   21399 cri.go:89] found id: "fa034ef6fed4ecde89cfb42c0d21d8c3f6bf1bb0f3863938a47c874ee0d12dc6"
	I1129 08:30:49.385248   21399 cri.go:89] found id: ""
	I1129 08:30:49.385306   21399 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 08:30:49.399406   21399 out.go:203] 
	W1129 08:30:49.400518   21399 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T08:30:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T08:30:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 08:30:49.400540   21399 out.go:285] * 
	* 
	W1129 08:30:49.403447   21399 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 08:30:49.404783   21399 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-053273 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.26s)

TestFunctional/parallel/ServiceCmdConnect (602.9s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-137675 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-137675 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-7rhkc" [618a56c9-6f4d-41f2-bd5f-2305c4f48474] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-137675 -n functional-137675
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-29 08:45:53.573803645 +0000 UTC m=+1063.733085314
functional_test.go:1645: (dbg) Run:  kubectl --context functional-137675 describe po hello-node-connect-7d85dfc575-7rhkc -n default
functional_test.go:1645: (dbg) kubectl --context functional-137675 describe po hello-node-connect-7d85dfc575-7rhkc -n default:
Name:             hello-node-connect-7d85dfc575-7rhkc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-137675/192.168.49.2
Start Time:       Sat, 29 Nov 2025 08:35:53 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:           10.244.0.5
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mcptt (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-mcptt:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-7rhkc to functional-137675
  Normal   Pulling    7m2s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m2s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m2s (x5 over 10m)      kubelet            Error: ErrImagePull
  Warning  Failed     4m54s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m43s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
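
The decisive event above is the pull error: with short-name-mode set to "enforcing" in the node's containers registries configuration, the unqualified image name kicbase/echo-server resolves ambiguously against multiple unqualified-search registries, so crio refuses to guess and the pull fails outright. Two hedged workarounds are sketched below; the docker.io prefix is an assumption about where the image is actually hosted.

	# 1. fully qualify the image so no short-name resolution is involved
	$ kubectl --context functional-137675 create deployment hello-node-connect \
	      --image docker.io/kicbase/echo-server

	# 2. or pin the short name to one registry with a registries.conf drop-in
	#    on the node (file name and alias are illustrative), then restart crio:
	#    /etc/containers/registries.conf.d/99-echo-server.conf:
	#      [aliases]
	#      "kicbase/echo-server" = "docker.io/kicbase/echo-server"
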
functional_test.go:1645: (dbg) Run:  kubectl --context functional-137675 logs hello-node-connect-7d85dfc575-7rhkc -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-137675 logs hello-node-connect-7d85dfc575-7rhkc -n default: exit status 1 (65.758294ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-7rhkc" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-137675 logs hello-node-connect-7d85dfc575-7rhkc -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-137675 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-7rhkc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-137675/192.168.49.2
Start Time:       Sat, 29 Nov 2025 08:35:53 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:           10.244.0.5
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mcptt (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-mcptt:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-7rhkc to functional-137675
  Normal   Pulling    7m2s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m2s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m2s (x5 over 10m)      kubelet            Error: ErrImagePull
  Warning  Failed     4m54s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m43s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"

functional_test.go:1618: (dbg) Run:  kubectl --context functional-137675 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-137675 logs -l app=hello-node-connect: exit status 1 (59.368932ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-7rhkc" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-137675 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-137675 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.102.161.51
IPs:                      10.102.161.51
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30730/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
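
The empty Endpoints field in the Service description above is a consequence of the pull failure rather than a separate problem: the selector matches the pod, but a pod stuck in ImagePullBackOff never becomes Ready, so it is never added to the endpoints list and the NodePort has nothing to forward to. A quick confirmation, as a sketch:

	$ kubectl --context functional-137675 get endpoints hello-node-connect
	#   ENDPOINTS should show <none> while the pod is not Ready
	$ kubectl --context functional-137675 get pods -l app=hello-node-connect -o wide
	#   expect READY 0/1 with STATUS ImagePullBackOff, hence no endpoint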
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-137675
helpers_test.go:243: (dbg) docker inspect functional-137675:

-- stdout --
	[
	    {
	        "Id": "633eaade2fa1eca5b2b78b76e7cf87556bdece53a6051e18516461f63545d624",
	        "Created": "2025-11-29T08:34:17.87612697Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 33023,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T08:34:17.91001322Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/633eaade2fa1eca5b2b78b76e7cf87556bdece53a6051e18516461f63545d624/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/633eaade2fa1eca5b2b78b76e7cf87556bdece53a6051e18516461f63545d624/hostname",
	        "HostsPath": "/var/lib/docker/containers/633eaade2fa1eca5b2b78b76e7cf87556bdece53a6051e18516461f63545d624/hosts",
	        "LogPath": "/var/lib/docker/containers/633eaade2fa1eca5b2b78b76e7cf87556bdece53a6051e18516461f63545d624/633eaade2fa1eca5b2b78b76e7cf87556bdece53a6051e18516461f63545d624-json.log",
	        "Name": "/functional-137675",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-137675:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-137675",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "633eaade2fa1eca5b2b78b76e7cf87556bdece53a6051e18516461f63545d624",
	                "LowerDir": "/var/lib/docker/overlay2/55ec676112ff036242b89bd98e23be133d164cbfc46023665a164ed2e83688e0-init/diff:/var/lib/docker/overlay2/5b012372cfb54f6c71f4d7f0bca0124866eeda530eaa04bd84a67c7b4c8b35a8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/55ec676112ff036242b89bd98e23be133d164cbfc46023665a164ed2e83688e0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/55ec676112ff036242b89bd98e23be133d164cbfc46023665a164ed2e83688e0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/55ec676112ff036242b89bd98e23be133d164cbfc46023665a164ed2e83688e0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-137675",
	                "Source": "/var/lib/docker/volumes/functional-137675/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-137675",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-137675",
	                "name.minikube.sigs.k8s.io": "functional-137675",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "ab07e5868da30c1a2a06097b5ad0ca7913f45df1a79a0029baee5621600ffbdf",
	            "SandboxKey": "/var/run/docker/netns/ab07e5868da3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-137675": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "edd9b219ebeec97a916eb90f5bde4366107cecb64bc2faae3091e3df7cff6a94",
	                    "EndpointID": "0b705a6375036b7de5c638df5d07ae35e02397f4956215a2158775278a874112",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "22:e3:4d:f7:9a:1f",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-137675",
	                        "633eaade2fa1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
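Note: nothing in the docker inspect output implicates the node container itself: its State is running, and the apiserver port 8441/tcp is published to 127.0.0.1:32781 as expected. To spot-check a single published port without reading the full JSON, one could use docker port (illustrative, not part of the recorded run):

	# Print the host binding for the in-container apiserver port
	docker port functional-137675 8441/tcp
	# per the inspect above, this should print: 127.0.0.1:32781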
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-137675 -n functional-137675
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-137675 logs -n 25: (1.255137267s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                           ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-137675 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1742623872/001:/mount2 --alsologtostderr -v=1        │ functional-137675 │ jenkins │ v1.37.0 │ 29 Nov 25 08:36 UTC │                     │
	│ ssh            │ functional-137675 ssh findmnt -T /mount1                                                                                  │ functional-137675 │ jenkins │ v1.37.0 │ 29 Nov 25 08:36 UTC │ 29 Nov 25 08:36 UTC │
	│ ssh            │ functional-137675 ssh findmnt -T /mount2                                                                                  │ functional-137675 │ jenkins │ v1.37.0 │ 29 Nov 25 08:36 UTC │ 29 Nov 25 08:36 UTC │
	│ ssh            │ functional-137675 ssh findmnt -T /mount3                                                                                  │ functional-137675 │ jenkins │ v1.37.0 │ 29 Nov 25 08:36 UTC │ 29 Nov 25 08:36 UTC │
	│ mount          │ -p functional-137675 --kill=true                                                                                          │ functional-137675 │ jenkins │ v1.37.0 │ 29 Nov 25 08:36 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-137675 --alsologtostderr -v=1                                                            │ functional-137675 │ jenkins │ v1.37.0 │ 29 Nov 25 08:36 UTC │ 29 Nov 25 08:36 UTC │
	│ cp             │ functional-137675 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                        │ functional-137675 │ jenkins │ v1.37.0 │ 29 Nov 25 08:36 UTC │ 29 Nov 25 08:36 UTC │
	│ ssh            │ functional-137675 ssh -n functional-137675 sudo cat /home/docker/cp-test.txt                                              │ functional-137675 │ jenkins │ v1.37.0 │ 29 Nov 25 08:36 UTC │ 29 Nov 25 08:36 UTC │
	│ cp             │ functional-137675 cp functional-137675:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd582501262/001/cp-test.txt │ functional-137675 │ jenkins │ v1.37.0 │ 29 Nov 25 08:36 UTC │ 29 Nov 25 08:36 UTC │
	│ ssh            │ functional-137675 ssh -n functional-137675 sudo cat /home/docker/cp-test.txt                                              │ functional-137675 │ jenkins │ v1.37.0 │ 29 Nov 25 08:36 UTC │ 29 Nov 25 08:36 UTC │
	│ cp             │ functional-137675 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                 │ functional-137675 │ jenkins │ v1.37.0 │ 29 Nov 25 08:36 UTC │ 29 Nov 25 08:36 UTC │
	│ ssh            │ functional-137675 ssh -n functional-137675 sudo cat /tmp/does/not/exist/cp-test.txt                                       │ functional-137675 │ jenkins │ v1.37.0 │ 29 Nov 25 08:36 UTC │ 29 Nov 25 08:36 UTC │
	│ ssh            │ functional-137675 ssh sudo cat /etc/test/nested/copy/9216/hosts                                                           │ functional-137675 │ jenkins │ v1.37.0 │ 29 Nov 25 08:36 UTC │ 29 Nov 25 08:36 UTC │
	│ ssh            │ functional-137675 ssh echo hello                                                                                          │ functional-137675 │ jenkins │ v1.37.0 │ 29 Nov 25 08:36 UTC │ 29 Nov 25 08:36 UTC │
	│ ssh            │ functional-137675 ssh cat /etc/hostname                                                                                   │ functional-137675 │ jenkins │ v1.37.0 │ 29 Nov 25 08:36 UTC │ 29 Nov 25 08:36 UTC │
	│ image          │ functional-137675 image ls --format short --alsologtostderr                                                               │ functional-137675 │ jenkins │ v1.37.0 │ 29 Nov 25 08:36 UTC │ 29 Nov 25 08:36 UTC │
	│ image          │ functional-137675 image ls --format json --alsologtostderr                                                                │ functional-137675 │ jenkins │ v1.37.0 │ 29 Nov 25 08:36 UTC │ 29 Nov 25 08:36 UTC │
	│ image          │ functional-137675 image ls --format table --alsologtostderr                                                               │ functional-137675 │ jenkins │ v1.37.0 │ 29 Nov 25 08:36 UTC │ 29 Nov 25 08:36 UTC │
	│ image          │ functional-137675 image ls --format yaml --alsologtostderr                                                                │ functional-137675 │ jenkins │ v1.37.0 │ 29 Nov 25 08:36 UTC │ 29 Nov 25 08:36 UTC │
	│ ssh            │ functional-137675 ssh pgrep buildkitd                                                                                     │ functional-137675 │ jenkins │ v1.37.0 │ 29 Nov 25 08:36 UTC │                     │
	│ image          │ functional-137675 image build -t localhost/my-image:functional-137675 testdata/build --alsologtostderr                    │ functional-137675 │ jenkins │ v1.37.0 │ 29 Nov 25 08:36 UTC │ 29 Nov 25 08:36 UTC │
	│ image          │ functional-137675 image ls                                                                                                │ functional-137675 │ jenkins │ v1.37.0 │ 29 Nov 25 08:36 UTC │ 29 Nov 25 08:36 UTC │
	│ update-context │ functional-137675 update-context --alsologtostderr -v=2                                                                   │ functional-137675 │ jenkins │ v1.37.0 │ 29 Nov 25 08:36 UTC │ 29 Nov 25 08:36 UTC │
	│ update-context │ functional-137675 update-context --alsologtostderr -v=2                                                                   │ functional-137675 │ jenkins │ v1.37.0 │ 29 Nov 25 08:36 UTC │ 29 Nov 25 08:36 UTC │
	│ update-context │ functional-137675 update-context --alsologtostderr -v=2                                                                   │ functional-137675 │ jenkins │ v1.37.0 │ 29 Nov 25 08:36 UTC │ 29 Nov 25 08:36 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 08:36:00
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 08:36:00.447054   43947 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:36:00.447332   43947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:36:00.447344   43947 out.go:374] Setting ErrFile to fd 2...
	I1129 08:36:00.447351   43947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:36:00.447662   43947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 08:36:00.448245   43947 out.go:368] Setting JSON to false
	I1129 08:36:00.449248   43947 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1112,"bootTime":1764404248,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 08:36:00.449309   43947 start.go:143] virtualization: kvm guest
	I1129 08:36:00.451786   43947 out.go:179] * [functional-137675] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 08:36:00.453733   43947 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 08:36:00.453736   43947 notify.go:221] Checking for updates...
	I1129 08:36:00.455294   43947 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 08:36:00.456927   43947 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 08:36:00.458426   43947 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5652/.minikube
	I1129 08:36:00.460046   43947 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 08:36:00.461380   43947 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 08:36:00.463096   43947 config.go:182] Loaded profile config "functional-137675": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:36:00.463831   43947 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 08:36:00.492825   43947 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1129 08:36:00.493038   43947 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 08:36:00.569085   43947 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-11-29 08:36:00.556565577 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 08:36:00.569234   43947 docker.go:319] overlay module found
	I1129 08:36:00.571009   43947 out.go:179] * Using the docker driver based on existing profile
	I1129 08:36:00.572117   43947 start.go:309] selected driver: docker
	I1129 08:36:00.572144   43947 start.go:927] validating driver "docker" against &{Name:functional-137675 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-137675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 08:36:00.572254   43947 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 08:36:00.572359   43947 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 08:36:00.641542   43947 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-11-29 08:36:00.629963288 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 08:36:00.642282   43947 cni.go:84] Creating CNI manager for ""
	I1129 08:36:00.642362   43947 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 08:36:00.642436   43947 start.go:353] cluster config:
	{Name:functional-137675 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-137675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 08:36:00.644864   43947 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Nov 29 08:36:23 functional-137675 crio[3583]: time="2025-11-29T08:36:23.031281534Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 29 08:36:23 functional-137675 crio[3583]: time="2025-11-29T08:36:23.03134893Z" level=info msg="Removed pod sandbox: df60edca67d9029a692bac613634e5df975f65570d5c63fd56d698d95a9d862e" id=4b8ffd21-0648-40ad-a56e-025670323dc1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 29 08:36:23 functional-137675 crio[3583]: time="2025-11-29T08:36:23.031706194Z" level=info msg="Stopping pod sandbox: 7dc308b2d6a5fecf6731da4f05c4626834d77a1a37fbe818e6f6f171220138ca" id=abcf6a1c-8f20-4c03-9acf-257441773507 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 29 08:36:23 functional-137675 crio[3583]: time="2025-11-29T08:36:23.03175483Z" level=info msg="Stopped pod sandbox (already stopped): 7dc308b2d6a5fecf6731da4f05c4626834d77a1a37fbe818e6f6f171220138ca" id=abcf6a1c-8f20-4c03-9acf-257441773507 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 29 08:36:23 functional-137675 crio[3583]: time="2025-11-29T08:36:23.032070849Z" level=info msg="Removing pod sandbox: 7dc308b2d6a5fecf6731da4f05c4626834d77a1a37fbe818e6f6f171220138ca" id=3c7cc580-2124-4c20-be8c-0cddbddfe892 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 29 08:36:23 functional-137675 crio[3583]: time="2025-11-29T08:36:23.037166798Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 29 08:36:23 functional-137675 crio[3583]: time="2025-11-29T08:36:23.037228614Z" level=info msg="Removed pod sandbox: 7dc308b2d6a5fecf6731da4f05c4626834d77a1a37fbe818e6f6f171220138ca" id=3c7cc580-2124-4c20-be8c-0cddbddfe892 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 29 08:36:23 functional-137675 crio[3583]: time="2025-11-29T08:36:23.083079911Z" level=info msg="Pulled image: docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da" id=5c6a392f-46d3-4be8-b7c3-3935b038d6cd name=/runtime.v1.ImageService/PullImage
	Nov 29 08:36:23 functional-137675 crio[3583]: time="2025-11-29T08:36:23.083792112Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=8568c074-a06e-4ddb-b242-9e64504c7c99 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 08:36:23 functional-137675 crio[3583]: time="2025-11-29T08:36:23.086554243Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=f14f3860-2505-41fc-8b8c-090430e5b86b name=/runtime.v1.ImageService/ImageStatus
	Nov 29 08:36:23 functional-137675 crio[3583]: time="2025-11-29T08:36:23.091393938Z" level=info msg="Creating container: default/mysql-5bb876957f-ml9z2/mysql" id=1fe62f7d-a392-401d-8d76-572fb719f6e1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 08:36:23 functional-137675 crio[3583]: time="2025-11-29T08:36:23.091531067Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 08:36:23 functional-137675 crio[3583]: time="2025-11-29T08:36:23.098285101Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 08:36:23 functional-137675 crio[3583]: time="2025-11-29T08:36:23.098918255Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 08:36:23 functional-137675 crio[3583]: time="2025-11-29T08:36:23.132051689Z" level=info msg="Created container 144baac623a1b80236b531acf42d640d71904d73b4a7179d0af696930aaf79e8: default/mysql-5bb876957f-ml9z2/mysql" id=1fe62f7d-a392-401d-8d76-572fb719f6e1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 08:36:23 functional-137675 crio[3583]: time="2025-11-29T08:36:23.132922669Z" level=info msg="Starting container: 144baac623a1b80236b531acf42d640d71904d73b4a7179d0af696930aaf79e8" id=f3ad4f97-47a1-4b93-afdc-81777d4ba6f2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 08:36:23 functional-137675 crio[3583]: time="2025-11-29T08:36:23.134797873Z" level=info msg="Started container" PID=7435 containerID=144baac623a1b80236b531acf42d640d71904d73b4a7179d0af696930aaf79e8 description=default/mysql-5bb876957f-ml9z2/mysql id=f3ad4f97-47a1-4b93-afdc-81777d4ba6f2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b06384a6bf6207bc859d0e8c1680cce38a10e3747a9a43b1f31cc4ca0ebd7662
	Nov 29 08:36:33 functional-137675 crio[3583]: time="2025-11-29T08:36:33.970912881Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=964e4606-6dd0-41b8-a7a8-e06d85da5c55 name=/runtime.v1.ImageService/PullImage
	Nov 29 08:36:33 functional-137675 crio[3583]: time="2025-11-29T08:36:33.971655823Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=4b828c21-ef3b-49dc-b2fa-aa5e1093ce4c name=/runtime.v1.ImageService/PullImage
	Nov 29 08:37:26 functional-137675 crio[3583]: time="2025-11-29T08:37:26.970250047Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=59f987b6-44e3-4554-988c-f59e9da0b544 name=/runtime.v1.ImageService/PullImage
	Nov 29 08:37:28 functional-137675 crio[3583]: time="2025-11-29T08:37:28.971922732Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=32b23e94-9c39-41dd-9cf9-2f747c6efb20 name=/runtime.v1.ImageService/PullImage
	Nov 29 08:38:51 functional-137675 crio[3583]: time="2025-11-29T08:38:51.971047885Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=29e035f9-b61c-4b07-8297-0b9d2a6ac1e9 name=/runtime.v1.ImageService/PullImage
	Nov 29 08:38:59 functional-137675 crio[3583]: time="2025-11-29T08:38:59.970215271Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d908aefc-5f64-4846-b3c7-2e1ca5be66e4 name=/runtime.v1.ImageService/PullImage
	Nov 29 08:41:36 functional-137675 crio[3583]: time="2025-11-29T08:41:36.97062598Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d6b5f5a0-3981-4e64-b0c0-2b90513d4d00 name=/runtime.v1.ImageService/PullImage
	Nov 29 08:41:49 functional-137675 crio[3583]: time="2025-11-29T08:41:49.970666758Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=7aff3543-f5b1-48ed-a9d9-33bf9f46d249 name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	144baac623a1b       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  9 minutes ago       Running             mysql                       0                   b06384a6bf620       mysql-5bb876957f-ml9z2                       default
	0a3df97340e9a       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         9 minutes ago       Running             kubernetes-dashboard        0                   6c0899523d7a0       kubernetes-dashboard-855c9754f9-psnw7        kubernetes-dashboard
	31978d861c73c       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   4d58d5f747d43       dashboard-metrics-scraper-77bf4d6c4c-4hpxf   kubernetes-dashboard
	c1bf240ce1090       docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541                  9 minutes ago       Running             myfrontend                  0                   ac25075313c29       sp-pod                                       default
	7375a87efa0eb       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              9 minutes ago       Exited              mount-munger                0                   d270354a146f8       busybox-mount                                default
	2d57d095dfcbc       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                  10 minutes ago      Running             nginx                       0                   ea4cf1fea9a87       nginx-svc                                    default
	90daaa6a41610       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Running             kube-apiserver              0                   b736feb3fda6e       kube-apiserver-functional-137675             kube-system
	a709181cbffc9       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Running             kube-controller-manager     1                   1b510832bbc65       kube-controller-manager-functional-137675    kube-system
	2b8525f56d1de       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        1                   e7f3eef8b415c       etcd-functional-137675                       kube-system
	8de2914f87754       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 10 minutes ago      Running             kube-scheduler              1                   1515405b74455       kube-scheduler-functional-137675             kube-system
	df3aac3d3638a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 10 minutes ago      Running             kube-proxy                  1                   fe2a9c9bc93dd       kube-proxy-vk5zx                             kube-system
	14fd0e85ffea8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 10 minutes ago      Running             kindnet-cni                 1                   2497ec1798fc1       kindnet-qzvr5                                kube-system
	82638469a6dbf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         1                   a26225943b74d       storage-provisioner                          kube-system
	9dbc7da95eb54       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 10 minutes ago      Running             coredns                     1                   c489e45250add       coredns-66bc5c9577-5f622                     kube-system
	796a7d6534ec6       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   c489e45250add       coredns-66bc5c9577-5f622                     kube-system
	2c1932f726229       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         0                   a26225943b74d       storage-provisioner                          kube-system
	5a2e41ad60d26       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Exited              kindnet-cni                 0                   2497ec1798fc1       kindnet-qzvr5                                kube-system
	a13c98375e6ff       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Exited              kube-proxy                  0                   fe2a9c9bc93dd       kube-proxy-vk5zx                             kube-system
	12f8271f4bdf3       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 11 minutes ago      Exited              etcd                        0                   e7f3eef8b415c       etcd-functional-137675                       kube-system
	76d89aac204a4       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 11 minutes ago      Exited              kube-controller-manager     0                   1b510832bbc65       kube-controller-manager-functional-137675    kube-system
	29d2e0ccf30eb       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 11 minutes ago      Exited              kube-scheduler              0                   1515405b74455       kube-scheduler-functional-137675             kube-system
	
	
	==> coredns [796a7d6534ec6dee1e08c8bfcdc7038e2bca9301a4b28cf3c5afa3f64768ad88] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45406 - 60208 "HINFO IN 5544595186180358333.5002515797846821023. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.061262735s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9dbc7da95eb543c56d15c080d1621cf8f832f2f274b6208eaad7b739c6277830] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42430 - 65354 "HINFO IN 2221760462000293847.885129850953460843. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.044965553s
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               functional-137675
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-137675
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=functional-137675
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T08_34_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 08:34:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-137675
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 08:45:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 08:45:46 +0000   Sat, 29 Nov 2025 08:34:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 08:45:46 +0000   Sat, 29 Nov 2025 08:34:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 08:45:46 +0000   Sat, 29 Nov 2025 08:34:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 08:45:46 +0000   Sat, 29 Nov 2025 08:34:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-137675
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                c1d87dbd-5d99-4cf1-bd61-75425669ab93
	  Boot ID:                    5b66e152-4b71-4129-a911-cefbf4861c86
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-gr27b                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m59s
	  default                     hello-node-connect-7d85dfc575-7rhkc           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-ml9z2                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     9m37s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	  kube-system                 coredns-66bc5c9577-5f622                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-137675                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-qzvr5                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-137675              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-137675     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-vk5zx                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-137675              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-4hpxf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m43s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-psnw7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-137675 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-137675 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-137675 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                node-controller  Node functional-137675 event: Registered Node functional-137675 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-137675 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-137675 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-137675 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-137675 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-137675 event: Registered Node functional-137675 in Controller
	
	
	==> dmesg <==
	[  +0.088968] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025527] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.969002] kauditd_printk_skb: 47 callbacks suppressed
	[Nov29 08:30] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	[  +1.030577] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	[  +1.023853] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	[  +1.023871] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	[  +1.023925] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	[  +2.047756] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	[  +4.031543] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	[Nov29 08:31] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	[ +16.382281] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	[ +32.252561] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	
	
	==> etcd [12f8271f4bdf36c1f61f7b6767bdf135212b76228478c9a2e44afb6edc472fd6] <==
	{"level":"warn","ts":"2025-11-29T08:34:27.276441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:34:27.283001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:34:27.290128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:34:27.305286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:34:27.311293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:34:27.317607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:34:27.364185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37496","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-29T08:35:21.036160Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-29T08:35:21.036277Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-137675","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-29T08:35:21.036369Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-29T08:35:21.037875Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-29T08:35:21.037939Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-29T08:35:21.037959Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-11-29T08:35:21.038029Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-11-29T08:35:21.038070Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-29T08:35:21.038077Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-29T08:35:21.038073Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-29T08:35:21.038106Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-29T08:35:21.038113Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-11-29T08:35:21.038086Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-29T08:35:21.038058Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-29T08:35:21.040140Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-29T08:35:21.040203Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-29T08:35:21.040225Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-29T08:35:21.040231Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-137675","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [2b8525f56d1de4d7f2c1cc7853b2fb8a9c6ada26e66e495e2a34a99fb15684f3] <==
	{"level":"warn","ts":"2025-11-29T08:35:24.589786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:35:24.598502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:35:24.605000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:35:24.612135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:35:24.618346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:35:24.624369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:35:24.631125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:35:24.637154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:35:24.643375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:35:24.653973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:35:24.659915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:35:24.672880Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:35:24.679063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:35:24.685359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:35:24.691564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:35:24.704921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:35:24.717866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:35:24.729053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:35:24.732185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:35:24.738520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T08:35:24.744893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35840","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-29T08:36:22.089734Z","caller":"traceutil/trace.go:172","msg":"trace[442448761] transaction","detail":"{read_only:false; response_revision:834; number_of_response:1; }","duration":"113.759511ms","start":"2025-11-29T08:36:21.975952Z","end":"2025-11-29T08:36:22.089711Z","steps":["trace[442448761] 'process raft request'  (duration: 111.783512ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T08:45:24.325887Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1135}
	{"level":"info","ts":"2025-11-29T08:45:24.345291Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1135,"took":"19.045261ms","hash":3010498478,"current-db-size-bytes":3510272,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":1540096,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-11-29T08:45:24.345334Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3010498478,"revision":1135,"compact-revision":-1}
	
	
	==> kernel <==
	 08:45:55 up 28 min,  0 user,  load average: 0.19, 0.16, 0.24
	Linux functional-137675 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [14fd0e85ffea800604f91de437a72bc04e7f820b2568d76efb1a774ff3647d16] <==
	I1129 08:43:50.877821       1 main.go:301] handling current node
	I1129 08:44:00.868828       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 08:44:00.868892       1 main.go:301] handling current node
	I1129 08:44:10.873604       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 08:44:10.873637       1 main.go:301] handling current node
	I1129 08:44:20.877683       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 08:44:20.877714       1 main.go:301] handling current node
	I1129 08:44:30.870987       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 08:44:30.871016       1 main.go:301] handling current node
	I1129 08:44:40.872994       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 08:44:40.873029       1 main.go:301] handling current node
	I1129 08:44:50.869589       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 08:44:50.869624       1 main.go:301] handling current node
	I1129 08:45:00.868541       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 08:45:00.868586       1 main.go:301] handling current node
	I1129 08:45:10.868568       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 08:45:10.868599       1 main.go:301] handling current node
	I1129 08:45:20.878278       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 08:45:20.878310       1 main.go:301] handling current node
	I1129 08:45:30.870911       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 08:45:30.870959       1 main.go:301] handling current node
	I1129 08:45:40.869352       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 08:45:40.869388       1 main.go:301] handling current node
	I1129 08:45:50.869374       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 08:45:50.869408       1 main.go:301] handling current node
	
	
	==> kindnet [5a2e41ad60d266faf7a876d60625e2f2f0ffa1ee9cce924d7818d413b394c8f0] <==
	I1129 08:34:36.344360       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 08:34:36.344649       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1129 08:34:36.344807       1 main.go:148] setting mtu 1500 for CNI 
	I1129 08:34:36.344825       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 08:34:36.344861       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T08:34:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 08:34:36.547468       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 08:34:36.547488       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 08:34:36.547497       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 08:34:36.639701       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1129 08:34:36.939834       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 08:34:36.939891       1 metrics.go:72] Registering metrics
	I1129 08:34:36.939956       1 controller.go:711] "Syncing nftables rules"
	I1129 08:34:46.549978       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 08:34:46.550016       1 main.go:301] handling current node
	I1129 08:34:56.547076       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 08:34:56.547137       1 main.go:301] handling current node
	I1129 08:35:06.552228       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1129 08:35:06.552288       1 main.go:301] handling current node
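	
	Both kindnet containers settle into the same ten-second reconcile loop: each tick re-reads the node's IPs and re-applies configuration for the current node. A minimal sketch of that loop shape, where the interval matches the timestamps above but the function names and no-op body are assumptions rather than kindnet's actual code:
	
	package main
	
	import (
		"log"
		"time"
	)
	
	// handleNode stands in for kindnet's real per-node work
	// (routes, nftables rules, ...).
	func handleNode(ips map[string]struct{}) {
		log.Printf("handling current node, IPs: %v", ips)
	}
	
	func main() {
		ips := map[string]struct{}{"192.168.49.2": {}} // node IP from the log
		ticker := time.NewTicker(10 * time.Second)
		defer ticker.Stop()
		for range ticker.C {
			handleNode(ips)
		}
	}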
	
	
	==> kube-apiserver [90daaa6a416104f98b3cc63cb9c7eb7e986175af4f04d254ca4047953fbfbc9b] <==
	I1129 08:35:25.969131       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 08:35:26.140817       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1129 08:35:26.355666       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1129 08:35:26.356893       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 08:35:26.361195       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 08:35:26.810859       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1129 08:35:26.901633       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 08:35:26.951806       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 08:35:26.957447       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 08:35:28.869201       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 08:35:45.875328       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.110.187.131"}
	I1129 08:35:50.797776       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.96.120.83"}
	I1129 08:35:53.234251       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.102.161.51"}
	I1129 08:35:55.361120       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.96.229.3"}
	E1129 08:36:07.811047       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:33648: use of closed network connection
	I1129 08:36:11.850733       1 controller.go:667] quota admission added evaluator for: namespaces
	I1129 08:36:11.958023       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.139.21"}
	I1129 08:36:11.976024       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.130.170"}
	E1129 08:36:14.372656       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:37816: use of closed network connection
	I1129 08:36:17.320462       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.111.149.147"}
	E1129 08:36:29.439903       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:47502: use of closed network connection
	E1129 08:36:30.856762       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:47516: use of closed network connection
	E1129 08:36:32.137791       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:60710: use of closed network connection
	I1129 08:45:25.164602       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
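	
	Each "allocated clusterIPs" line records the Service IP allocator handing out a virtual IP as the test creates Services (invalid-svc, nginx-svc, hello-node-connect, hello-node, the dashboard pair, mysql). A minimal client-go sketch of the client side of one such line, with a hypothetical Service name, selector, and port:
	
	package main
	
	import (
		"context"
		"log"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/intstr"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatalf("kubeconfig: %v", err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatalf("client: %v", err)
		}
		svc := &corev1.Service{
			ObjectMeta: metav1.ObjectMeta{Name: "hello-node-demo"}, // hypothetical
			Spec: corev1.ServiceSpec{
				Selector: map[string]string{"app": "hello-node"},
				Ports: []corev1.ServicePort{{
					Port:       8080,
					TargetPort: intstr.FromInt(8080),
				}},
			},
		}
		// No spec.clusterIP is set, so the apiserver's allocator picks one
		// and logs an "allocated clusterIPs" line like those above.
		created, err := cs.CoreV1().Services("default").Create(
			context.Background(), svc, metav1.CreateOptions{})
		if err != nil {
			log.Fatalf("create: %v", err)
		}
		log.Printf("allocated ClusterIP: %s", created.Spec.ClusterIP)
	}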
	
	
	==> kube-controller-manager [76d89aac204a4022278129c80d0ce1298e9cad18ceb54364208a53f45342810d] <==
	I1129 08:34:34.758416       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1129 08:34:34.758463       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1129 08:34:34.758501       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1129 08:34:34.758521       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1129 08:34:34.758502       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1129 08:34:34.758654       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1129 08:34:34.759743       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1129 08:34:34.759791       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1129 08:34:34.759830       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1129 08:34:34.759863       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1129 08:34:34.761893       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1129 08:34:34.761996       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1129 08:34:34.762139       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-137675"
	I1129 08:34:34.762240       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1129 08:34:34.762641       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1129 08:34:34.762710       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1129 08:34:34.762757       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1129 08:34:34.762765       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1129 08:34:34.762772       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1129 08:34:34.764679       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 08:34:34.767673       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1129 08:34:34.768310       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="functional-137675" podCIDRs=["10.244.0.0/24"]
	I1129 08:34:34.769280       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 08:34:34.786438       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 08:34:49.764474       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [a709181cbffc920153775c9eec3e494d1bd6330bb5c3befc9fce651e44ff8544] <==
	I1129 08:35:28.564366       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1129 08:35:28.564387       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1129 08:35:28.564382       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1129 08:35:28.564389       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1129 08:35:28.564404       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1129 08:35:28.564905       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1129 08:35:28.564913       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1129 08:35:28.564989       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1129 08:35:28.565669       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1129 08:35:28.565735       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1129 08:35:28.569918       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 08:35:28.575105       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 08:35:28.581275       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1129 08:35:28.584555       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1129 08:35:28.586908       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1129 08:35:28.597295       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 08:35:28.599487       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 08:35:28.599506       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1129 08:35:28.599513       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1129 08:36:11.895099       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1129 08:36:11.899944       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1129 08:36:11.903807       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1129 08:36:11.904259       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1129 08:36:11.908380       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1129 08:36:11.912532       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
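	
	The serviceaccount "kubernetes-dashboard" not found errors are a benign create-ordering race: the dashboard's ReplicaSets are synced before their ServiceAccount exists, each pod creation is forbidden, and the controller retries until the ServiceAccount appears. A minimal sketch of that retry shape using client-go's polling helper, where the clientset setup and the 30-second budget are assumptions:
	
	package main
	
	import (
		"context"
		"log"
		"time"
	
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatalf("kubeconfig: %v", err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatalf("client: %v", err)
		}
		err = wait.PollUntilContextTimeout(context.Background(),
			time.Second, 30*time.Second, true,
			func(ctx context.Context) (bool, error) {
				_, err := cs.CoreV1().ServiceAccounts("kubernetes-dashboard").
					Get(ctx, "kubernetes-dashboard", metav1.GetOptions{})
				if apierrors.IsNotFound(err) {
					return false, nil // not there yet; retry, as the controller does
				}
				return err == nil, err
			})
		if err != nil {
			log.Fatalf("serviceaccount never appeared: %v", err)
		}
		log.Println("serviceaccount exists; pod creation can proceed")
	}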
	
	
	==> kube-proxy [a13c98375e6ffa57a12fb52eaa1b75da2e757b6e6cf8e9dc7aadf3bda31231f4] <==
	I1129 08:34:36.225205       1 server_linux.go:53] "Using iptables proxy"
	I1129 08:34:36.287026       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 08:34:36.387382       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 08:34:36.387426       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1129 08:34:36.387557       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 08:34:36.406328       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 08:34:36.406383       1 server_linux.go:132] "Using iptables Proxier"
	I1129 08:34:36.411274       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 08:34:36.411634       1 server.go:527] "Version info" version="v1.34.1"
	I1129 08:34:36.411676       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 08:34:36.412887       1 config.go:200] "Starting service config controller"
	I1129 08:34:36.412909       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 08:34:36.413056       1 config.go:106] "Starting endpoint slice config controller"
	I1129 08:34:36.413077       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 08:34:36.413090       1 config.go:309] "Starting node config controller"
	I1129 08:34:36.413100       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 08:34:36.413106       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 08:34:36.413115       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 08:34:36.413121       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 08:34:36.513443       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 08:34:36.513504       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 08:34:36.513518       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [df3aac3d3638a0cecce5558f3a1689d786f12c074373a69a2ed1d7499eb0468e] <==
	I1129 08:35:11.562735       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1129 08:35:11.563632       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-137675&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 08:35:12.898463       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-137675&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 08:35:14.806808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-137675&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 08:35:19.207359       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-137675&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1129 08:35:28.963608       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 08:35:28.963643       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1129 08:35:28.963719       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 08:35:28.984301       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 08:35:28.984353       1 server_linux.go:132] "Using iptables Proxier"
	I1129 08:35:28.989916       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 08:35:28.990345       1 server.go:527] "Version info" version="v1.34.1"
	I1129 08:35:28.990377       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 08:35:28.991655       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 08:35:28.991693       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 08:35:28.991690       1 config.go:200] "Starting service config controller"
	I1129 08:35:28.991714       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 08:35:28.991725       1 config.go:106] "Starting endpoint slice config controller"
	I1129 08:35:28.991750       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 08:35:28.991769       1 config.go:309] "Starting node config controller"
	I1129 08:35:28.991779       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 08:35:29.091895       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 08:35:29.091919       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1129 08:35:29.091897       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 08:35:29.092159       1 shared_informer.go:356] "Caches are synced" controller="service config"
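	
	Both kube-proxy instances log the same advisory: with nodePortAddresses unset, NodePort connections are accepted on every local IP, and the suggested `--nodeport-addresses primary` restricts them to the node's primary addresses. The effect amounts to a CIDR filter over local IPs; a minimal sketch of that check, with the subnet and test IPs assumed:
	
	package main
	
	import (
		"fmt"
		"net"
	)
	
	func main() {
		// Assumed primary subnet; with nodePortAddresses set to it,
		// loopback would stop accepting NodePort traffic.
		_, cidr, err := net.ParseCIDR("192.168.49.0/24")
		if err != nil {
			panic(err)
		}
		for _, ip := range []string{"127.0.0.1", "192.168.49.2"} {
			fmt.Printf("%-13s accepts NodePort: %v\n", ip, cidr.Contains(net.ParseIP(ip)))
		}
	}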
	
	
	==> kube-scheduler [29d2e0ccf30eb9744e0080eb659790f75c35aafff2af5075866bd4c976c52131] <==
	E1129 08:34:27.806654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1129 08:34:27.806948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 08:34:27.806969       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1129 08:34:27.807012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 08:34:27.806893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 08:34:27.807020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1129 08:34:28.617482       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 08:34:28.642728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1129 08:34:28.703251       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1129 08:34:28.707340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1129 08:34:28.714399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1129 08:34:28.778336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 08:34:28.801670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1129 08:34:28.965291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1129 08:34:28.987429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 08:34:28.998469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 08:34:29.000474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1129 08:34:29.170695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1129 08:34:32.203541       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 08:35:21.255214       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 08:35:21.255278       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1129 08:35:21.255289       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1129 08:35:21.255314       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1129 08:35:21.255324       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1129 08:35:21.255350       1 run.go:72] "command failed" err="finished without leader elect"
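	
	The burst of "Failed to watch ... forbidden" errors is the scheduler's reflectors starting before the apiserver finishes bootstrapping RBAC: every list is rejected, each reflector backs off and retries, and the block goes quiet once caches sync at 08:34:32. The closing "finished without leader elect" records the process exiting after its leader-election context ended during the restart. A minimal sketch of the retry shape using client-go's backoff helper, with illustrative parameters:
	
	package main
	
	import (
		"errors"
		"log"
		"time"
	
		"k8s.io/apimachinery/pkg/util/wait"
	)
	
	var errForbidden = errors.New("forbidden") // stands in for the RBAC rejection
	
	func listNodes(ready bool) error {
		if !ready {
			return errForbidden
		}
		return nil
	}
	
	func main() {
		start := time.Now()
		backoff := wait.Backoff{Duration: 200 * time.Millisecond, Factor: 2, Steps: 6}
		err := wait.ExponentialBackoff(backoff, func() (bool, error) {
			// Pretend RBAC becomes ready after about a second.
			if err := listNodes(time.Since(start) > time.Second); err != nil {
				log.Printf("list failed, retrying: %v", err)
				return false, nil
			}
			return true, nil
		})
		if err != nil {
			log.Fatalf("never synced: %v", err)
		}
		log.Println("caches synced")
	}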
	
	
	==> kube-scheduler [8de2914f87754b77f9721093ae60c16088289a7877e8309a5afe641886ab1f3d] <==
	I1129 08:35:23.925038       1 serving.go:386] Generated self-signed cert in-memory
	W1129 08:35:25.170320       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1129 08:35:25.170366       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	W1129 08:35:25.170380       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1129 08:35:25.170389       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1129 08:35:25.188579       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1129 08:35:25.188602       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 08:35:25.190406       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 08:35:25.190440       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 08:35:25.190704       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1129 08:35:25.190927       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1129 08:35:25.291322       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 08:43:20 functional-137675 kubelet[4105]: E1129 08:43:20.970236    4105 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7rhkc" podUID="618a56c9-6f4d-41f2-bd5f-2305c4f48474"
	Nov 29 08:43:22 functional-137675 kubelet[4105]: E1129 08:43:22.970928    4105 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-gr27b" podUID="3405ee91-2796-4913-a44e-f585682add9e"
	Nov 29 08:43:31 functional-137675 kubelet[4105]: E1129 08:43:31.970137    4105 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7rhkc" podUID="618a56c9-6f4d-41f2-bd5f-2305c4f48474"
	Nov 29 08:43:34 functional-137675 kubelet[4105]: E1129 08:43:34.970341    4105 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-gr27b" podUID="3405ee91-2796-4913-a44e-f585682add9e"
	Nov 29 08:43:44 functional-137675 kubelet[4105]: E1129 08:43:44.970219    4105 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7rhkc" podUID="618a56c9-6f4d-41f2-bd5f-2305c4f48474"
	Nov 29 08:43:46 functional-137675 kubelet[4105]: E1129 08:43:46.970641    4105 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-gr27b" podUID="3405ee91-2796-4913-a44e-f585682add9e"
	Nov 29 08:43:58 functional-137675 kubelet[4105]: E1129 08:43:58.969562    4105 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-gr27b" podUID="3405ee91-2796-4913-a44e-f585682add9e"
	Nov 29 08:43:58 functional-137675 kubelet[4105]: E1129 08:43:58.969630    4105 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7rhkc" podUID="618a56c9-6f4d-41f2-bd5f-2305c4f48474"
	Nov 29 08:44:10 functional-137675 kubelet[4105]: E1129 08:44:10.969831    4105 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-gr27b" podUID="3405ee91-2796-4913-a44e-f585682add9e"
	Nov 29 08:44:13 functional-137675 kubelet[4105]: E1129 08:44:13.969670    4105 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7rhkc" podUID="618a56c9-6f4d-41f2-bd5f-2305c4f48474"
	Nov 29 08:44:25 functional-137675 kubelet[4105]: E1129 08:44:25.970299    4105 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-gr27b" podUID="3405ee91-2796-4913-a44e-f585682add9e"
	Nov 29 08:44:27 functional-137675 kubelet[4105]: E1129 08:44:27.969985    4105 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7rhkc" podUID="618a56c9-6f4d-41f2-bd5f-2305c4f48474"
	Nov 29 08:44:39 functional-137675 kubelet[4105]: E1129 08:44:39.969362    4105 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-gr27b" podUID="3405ee91-2796-4913-a44e-f585682add9e"
	Nov 29 08:44:41 functional-137675 kubelet[4105]: E1129 08:44:41.969899    4105 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7rhkc" podUID="618a56c9-6f4d-41f2-bd5f-2305c4f48474"
	Nov 29 08:44:54 functional-137675 kubelet[4105]: E1129 08:44:54.969998    4105 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-gr27b" podUID="3405ee91-2796-4913-a44e-f585682add9e"
	Nov 29 08:44:56 functional-137675 kubelet[4105]: E1129 08:44:56.969958    4105 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7rhkc" podUID="618a56c9-6f4d-41f2-bd5f-2305c4f48474"
	Nov 29 08:45:05 functional-137675 kubelet[4105]: E1129 08:45:05.969322    4105 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-gr27b" podUID="3405ee91-2796-4913-a44e-f585682add9e"
	Nov 29 08:45:08 functional-137675 kubelet[4105]: E1129 08:45:08.969577    4105 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7rhkc" podUID="618a56c9-6f4d-41f2-bd5f-2305c4f48474"
	Nov 29 08:45:16 functional-137675 kubelet[4105]: E1129 08:45:16.969880    4105 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-gr27b" podUID="3405ee91-2796-4913-a44e-f585682add9e"
	Nov 29 08:45:23 functional-137675 kubelet[4105]: E1129 08:45:23.969547    4105 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7rhkc" podUID="618a56c9-6f4d-41f2-bd5f-2305c4f48474"
	Nov 29 08:45:27 functional-137675 kubelet[4105]: E1129 08:45:27.970170    4105 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-gr27b" podUID="3405ee91-2796-4913-a44e-f585682add9e"
	Nov 29 08:45:37 functional-137675 kubelet[4105]: E1129 08:45:37.970270    4105 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7rhkc" podUID="618a56c9-6f4d-41f2-bd5f-2305c4f48474"
	Nov 29 08:45:38 functional-137675 kubelet[4105]: E1129 08:45:38.970644    4105 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-gr27b" podUID="3405ee91-2796-4913-a44e-f585682add9e"
	Nov 29 08:45:48 functional-137675 kubelet[4105]: E1129 08:45:48.970384    4105 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7rhkc" podUID="618a56c9-6f4d-41f2-bd5f-2305c4f48474"
	Nov 29 08:45:53 functional-137675 kubelet[4105]: E1129 08:45:53.969772    4105 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-gr27b" podUID="3405ee91-2796-4913-a44e-f585682add9e"
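	
	Every kubelet error above is one recurring pull failure: kicbase/echo-server is an unqualified short name, and CRI-O's enforcing short-name mode refuses to guess among multiple candidate registries, leaving both hello-node pods in ImagePullBackOff. A fully qualified reference removes the ambiguity; a minimal sketch of Docker-style normalization via the distribution/reference library (the library is illustrative; the actual fix is writing the qualified name in the pod spec):
	
	package main
	
	import (
		"fmt"
		"log"
	
		"github.com/distribution/reference"
	)
	
	func main() {
		named, err := reference.ParseNormalizedNamed("kicbase/echo-server:latest")
		if err != nil {
			log.Fatalf("parse: %v", err)
		}
		// Prints "docker.io/kicbase/echo-server:latest": with the registry
		// explicit, short-name resolution has nothing to guess.
		fmt.Println(reference.TagNameOnly(named).String())
	}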
	
	
	==> kubernetes-dashboard [0a3df97340e9ad6bbccc1a4a22284cc96684567f2cc494e37da3625b0b058106] <==
	2025/11/29 08:36:15 Starting overwatch
	2025/11/29 08:36:15 Using namespace: kubernetes-dashboard
	2025/11/29 08:36:15 Using in-cluster config to connect to apiserver
	2025/11/29 08:36:15 Using secret token for csrf signing
	2025/11/29 08:36:15 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/29 08:36:15 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/29 08:36:15 Successful initial request to the apiserver, version: v1.34.1
	2025/11/29 08:36:15 Generating JWE encryption key
	2025/11/29 08:36:15 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/29 08:36:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/29 08:36:16 Initializing JWE encryption key from synchronized object
	2025/11/29 08:36:16 Creating in-cluster Sidecar client
	2025/11/29 08:36:16 Successful request to sidecar
	2025/11/29 08:36:16 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [2c1932f726229efa4c43753f82088ada1d67509c90536df37773dddc826338a3] <==
	W1129 08:34:47.279663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:34:47.285440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 08:34:47.378282       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-137675_2a2bc034-3ead-4ca2-a5fb-af56243e917a!
	W1129 08:34:49.288656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:34:49.292420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:34:51.300256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:34:51.305579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:34:53.310611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:34:53.314692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:34:55.317595       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:34:55.322088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:34:57.325734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:34:57.329448       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:34:59.332961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:34:59.338671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:35:01.351112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:35:01.366340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:35:03.369717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:35:03.374037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:35:05.377481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:35:05.381272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:35:07.385152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:35:07.390476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:35:09.393691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:35:09.399218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [82638469a6dbf2e690f81f8180ce1b26f5ee873ca5e5f23aec91fe7da0610192] <==
	W1129 08:45:30.410707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:45:32.413935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:45:32.418396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:45:34.421545       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:45:34.426360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:45:36.429510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:45:36.433696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:45:38.436174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:45:38.441077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:45:40.443897       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:45:40.447771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:45:42.451011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:45:42.454781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:45:44.457715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:45:44.462274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:45:46.465040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:45:46.468734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:45:48.471942       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:45:48.476677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:45:50.479691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:45:50.483672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:45:52.486540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:45:52.491177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:45:54.494256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:45:54.497958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
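Note: the repeated storage-provisioner warnings above are informational; the provisioner still watches v1 Endpoints, which Kubernetes deprecates in favor of discovery.k8s.io/v1 EndpointSlice. As a hedged sketch (not commands from this run), the replacement objects can be compared with the legacy view like so:

	# List the EndpointSlice objects that supersede v1 Endpoints
	kubectl --context functional-137675 get endpointslices -A
	# The legacy view that triggers the deprecation warning
	kubectl --context functional-137675 get endpoints -A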
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-137675 -n functional-137675
helpers_test.go:269: (dbg) Run:  kubectl --context functional-137675 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-gr27b hello-node-connect-7d85dfc575-7rhkc
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-137675 describe pod busybox-mount hello-node-75c85bcc94-gr27b hello-node-connect-7d85dfc575-7rhkc
helpers_test.go:290: (dbg) kubectl --context functional-137675 describe pod busybox-mount hello-node-75c85bcc94-gr27b hello-node-connect-7d85dfc575-7rhkc:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-137675/192.168.49.2
	Start Time:       Sat, 29 Nov 2025 08:36:02 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://7375a87efa0ebc1c1d432b216d8c332210c7a1889917c6112e08fe6daa52ae59
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 29 Nov 2025 08:36:04 +0000
	      Finished:     Sat, 29 Nov 2025 08:36:04 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tjp96 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-tjp96:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m53s  default-scheduler  Successfully assigned default/busybox-mount to functional-137675
	  Normal  Pulling    9m53s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m51s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.195s (1.195s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m51s  kubelet            Created container: mount-munger
	  Normal  Started    9m51s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-gr27b
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-137675/192.168.49.2
	Start Time:       Sat, 29 Nov 2025 08:35:55 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kwxf7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-kwxf7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-gr27b to functional-137675
	  Normal   Pulling    6m56s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m56s (x5 over 10m)     kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m56s (x5 over 10m)     kubelet            Error: ErrImagePull
	  Warning  Failed     4m56s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m44s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-7rhkc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-137675/192.168.49.2
	Start Time:       Sat, 29 Nov 2025 08:35:53 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mcptt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-mcptt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-7rhkc to functional-137675
	  Normal   Pulling    7m5s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m5s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m5s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Warning  Failed     4m57s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m46s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.90s)
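The pull failures above ("short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list") point at CRI-O short-name resolution rather than the service plumbing: with short-name-mode = "enforcing", an unqualified image that matches more than one unqualified-search registry cannot be resolved non-interactively, so the kubelet never gets the image. A hedged node-side workaround (this assumes the short-name-mode key is already present in /etc/containers/registries.conf inside the node):

	# Relax short-name resolution inside the minikube node, then restart CRI-O
	out/minikube-linux-amd64 -p functional-137675 ssh -- "sudo sed -i 's/^short-name-mode.*/short-name-mode = \"permissive\"/' /etc/containers/registries.conf"
	out/minikube-linux-amd64 -p functional-137675 ssh -- sudo systemctl restart crio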

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (2.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-137675 image ls --format short --alsologtostderr: (2.287170686s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-137675 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-137675 image ls --format short --alsologtostderr:
I1129 08:36:19.929671   48528 out.go:360] Setting OutFile to fd 1 ...
I1129 08:36:19.929967   48528 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 08:36:19.929979   48528 out.go:374] Setting ErrFile to fd 2...
I1129 08:36:19.929986   48528 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 08:36:19.930200   48528 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
I1129 08:36:19.930742   48528 config.go:182] Loaded profile config "functional-137675": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1129 08:36:19.930885   48528 config.go:182] Loaded profile config "functional-137675": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1129 08:36:19.931343   48528 cli_runner.go:164] Run: docker container inspect functional-137675 --format={{.State.Status}}
I1129 08:36:19.952039   48528 ssh_runner.go:195] Run: systemctl --version
I1129 08:36:19.952114   48528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-137675
I1129 08:36:19.975084   48528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/functional-137675/id_rsa Username:docker}
I1129 08:36:20.085765   48528 ssh_runner.go:195] Run: sudo crictl images --output json
I1129 08:36:22.117242   48528 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.03143937s)
W1129 08:36:22.117336   48528 cache_images.go:736] Failed to list images for profile functional-137675 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

                                                
                                                
stderr:
E1129 08:36:22.114599    7240 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="image:{}"
time="2025-11-29T08:36:22Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = context deadline exceeded"
functional_test.go:290: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (2.29s)
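Worth noting: `sudo crictl images --output json` completed in 2.03s and failed with DeadlineExceeded, which lines up with crictl's default RPC timeout of 2s. A hedged retry with a longer deadline (the 30s value is arbitrary) would distinguish a slow image store from a hung one:

	# crictl's global --timeout defaults to 2s; raise it before re-listing
	out/minikube-linux-amd64 -p functional-137675 ssh -- sudo crictl --timeout 30s images --output json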

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 image load --daemon kicbase/echo-server:functional-137675 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-137675" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.07s)
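This failure, and the ImageReloadDaemon and ImageTagAndLoadDaemon variants that follow, exercise the same flow: tag an image in the host Docker daemon, push it into the cluster with `image load --daemon`, then expect it in `image ls`. A hedged manual repro of that flow, built from the commands already shown in the log:

	docker pull kicbase/echo-server:latest
	docker tag kicbase/echo-server:latest kicbase/echo-server:functional-137675
	out/minikube-linux-amd64 -p functional-137675 image load --daemon kicbase/echo-server:functional-137675
	# Verify the tag landed in the node's CRI-O image store
	out/minikube-linux-amd64 -p functional-137675 image ls | grep echo-server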

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 image load --daemon kicbase/echo-server:functional-137675 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-137675 image load --daemon kicbase/echo-server:functional-137675 --alsologtostderr: (1.003006671s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-137675" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-137675
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 image load --daemon kicbase/echo-server:functional-137675 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-137675" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.05s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 image save kicbase/echo-server:functional-137675 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)
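`image save` can only export a tag that actually exists inside the node's image store, and the load steps above never succeeded, so there was nothing to write. A hedged pre-check before saving:

	# Confirm the tag is present inside minikube before attempting image save
	out/minikube-linux-amd64 -p functional-137675 image ls | grep functional-137675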

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1129 08:35:54.744050   43427 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:35:54.744365   43427 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:35:54.744377   43427 out.go:374] Setting ErrFile to fd 2...
	I1129 08:35:54.744384   43427 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:35:54.744578   43427 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 08:35:54.745179   43427 config.go:182] Loaded profile config "functional-137675": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:35:54.745301   43427 config.go:182] Loaded profile config "functional-137675": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:35:54.745738   43427 cli_runner.go:164] Run: docker container inspect functional-137675 --format={{.State.Status}}
	I1129 08:35:54.764341   43427 ssh_runner.go:195] Run: systemctl --version
	I1129 08:35:54.764409   43427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-137675
	I1129 08:35:54.783044   43427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/functional-137675/id_rsa Username:docker}
	I1129 08:35:54.882601   43427 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1129 08:35:54.882671   43427 cache_images.go:255] Failed to load cached images for "functional-137675": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1129 08:35:54.882710   43427 cache_images.go:267] failed pushing to: functional-137675

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)
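The stat error is a direct consequence of the previous test: ImageSaveToFile never produced the tarball, so the load has no input. A hedged sketch of the intended save/load round trip (the /tmp path is an example, not the path the suite uses):

	out/minikube-linux-amd64 -p functional-137675 image save kicbase/echo-server:functional-137675 /tmp/echo-server-save.tar
	ls -l /tmp/echo-server-save.tar    # must exist before the load step
	out/minikube-linux-amd64 -p functional-137675 image load /tmp/echo-server-save.tar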

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-137675
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 image save --daemon kicbase/echo-server:functional-137675 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-137675
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-137675: exit status 1 (17.628229ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-137675

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-137675

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.35s)
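`image save --daemon` exports the in-cluster tag back into the host Docker daemon under a localhost/ prefix, which is exactly what the inspect step checks; since the image never made it into the cluster, there was nothing to export. A hedged verification pair mirroring the commands in the log:

	out/minikube-linux-amd64 -p functional-137675 image save --daemon kicbase/echo-server:functional-137675
	docker image inspect localhost/kicbase/echo-server:functional-137675 --format '{{.Id}}'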

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-137675 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-137675 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-gr27b" [3405ee91-2796-4913-a44e-f585682add9e] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-137675 -n functional-137675
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-29 08:45:55.705792381 +0000 UTC m=+1065.865074052
functional_test.go:1460: (dbg) Run:  kubectl --context functional-137675 describe po hello-node-75c85bcc94-gr27b -n default
functional_test.go:1460: (dbg) kubectl --context functional-137675 describe po hello-node-75c85bcc94-gr27b -n default:
Name:             hello-node-75c85bcc94-gr27b
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-137675/192.168.49.2
Start Time:       Sat, 29 Nov 2025 08:35:55 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kwxf7 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-kwxf7:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-gr27b to functional-137675
Normal   Pulling    6m56s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m56s (x5 over 10m)     kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m56s (x5 over 10m)     kubelet            Error: ErrImagePull
Warning  Failed     4m56s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m44s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-137675 logs hello-node-75c85bcc94-gr27b -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-137675 logs hello-node-75c85bcc94-gr27b -n default: exit status 1 (66.940755ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-gr27b" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-137675 logs hello-node-75c85bcc94-gr27b -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.60s)
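Same root cause as ServiceCmdConnect: the deployment references the unqualified kicbase/echo-server, which CRI-O's enforcing short-name mode refuses to resolve. A hedged deployment-side workaround is to pin a fully qualified reference (the docker.io prefix and latest tag are assumptions about where the image lives, not values taken from this run):

	kubectl --context functional-137675 create deployment hello-node --image=docker.io/kicbase/echo-server:latest
	kubectl --context functional-137675 rollout status deployment/hello-node --timeout=120s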

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-137675 service --namespace=default --https --url hello-node: exit status 115 (530.755573ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30329
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-137675 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-137675 service hello-node --url --format={{.IP}}: exit status 115 (533.738924ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-137675 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-137675 service hello-node --url: exit status 115 (551.299648ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30329
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-137675 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30329
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.55s)
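The HTTPS, Format, and URL subtests all exit with SVC_UNREACHABLE for the same reason: the NodePort and URL are resolved fine (both stdout blocks print them), but no Running pod backs the app=hello-node selector. A hedged pair of checks that separates the two layers:

	# Service and NodePort exist; the pod behind the selector does not run
	kubectl --context functional-137675 get svc hello-node -o wide
	kubectl --context functional-137675 get pods -l app=hello-node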

                                                
                                    
x
+
TestJSONOutput/pause/Command (2.33s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-169580 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-169580 --output=json --user=testUser: exit status 80 (2.332037508s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"03059d97-b3be-46f1-a7bb-01af4a6b39dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-169580 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"ad1beddc-2a34-4c39-96aa-c89bc164ff31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-29T08:55:00Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"23f4ba7e-f9d5-4a2c-aead-2a9388023eba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-169580 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.33s)
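The GUEST_PAUSE error ("open /run/runc: no such file or directory") means runc's default state directory was never created on the node; the unpause failure below and TestPause/serial/Pause show the same signature. One hedged diagnosis (an assumption, not confirmed by this log) is that CRI-O is tracking containers under a different OCI runtime or state root, so `runc list` at its default --root finds nothing. A diagnostic sketch:

	# Does runc's default state root exist at all?
	out/minikube-linux-amd64 -p json-output-169580 ssh -- sudo ls -ld /run/runc
	# Which OCI runtime is CRI-O actually configured to use?
	out/minikube-linux-amd64 -p json-output-169580 ssh -- "sudo crio config 2>/dev/null | grep -A3 default_runtime"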

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.58s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-169580 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-169580 --output=json --user=testUser: exit status 80 (1.580280209s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"58fb2be7-d917-4039-b0df-6ea066016023","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-169580 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"dfc296cb-0313-4fee-affb-ce6c44080bbd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-29T08:55:01Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"e9541039-16e9-4e49-8651-b9eee3ca866a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-169580 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.58s)

                                                
                                    
x
+
TestPause/serial/Pause (5.8s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-295501 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-295501 --alsologtostderr -v=5: exit status 80 (2.108579368s)

                                                
                                                
-- stdout --
	* Pausing node pause-295501 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 09:11:42.755861  239663 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:11:42.756186  239663 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:11:42.756196  239663 out.go:374] Setting ErrFile to fd 2...
	I1129 09:11:42.756200  239663 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:11:42.756407  239663 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 09:11:42.756653  239663 out.go:368] Setting JSON to false
	I1129 09:11:42.756670  239663 mustload.go:66] Loading cluster: pause-295501
	I1129 09:11:42.757063  239663 config.go:182] Loaded profile config "pause-295501": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:11:42.757500  239663 cli_runner.go:164] Run: docker container inspect pause-295501 --format={{.State.Status}}
	I1129 09:11:42.776637  239663 host.go:66] Checking if "pause-295501" exists ...
	I1129 09:11:42.777018  239663 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:11:42.833394  239663 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:84 SystemTime:2025-11-29 09:11:42.822713787 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:11:42.834001  239663 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-295501 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1129 09:11:42.835909  239663 out.go:179] * Pausing node pause-295501 ... 
	I1129 09:11:42.836955  239663 host.go:66] Checking if "pause-295501" exists ...
	I1129 09:11:42.837231  239663 ssh_runner.go:195] Run: systemctl --version
	I1129 09:11:42.837278  239663 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-295501
	I1129 09:11:42.856112  239663 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33049 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/pause-295501/id_rsa Username:docker}
	I1129 09:11:42.957739  239663 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:11:42.970737  239663 pause.go:52] kubelet running: true
	I1129 09:11:42.970816  239663 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1129 09:11:43.106831  239663 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1129 09:11:43.106942  239663 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1129 09:11:43.177634  239663 cri.go:89] found id: "a8bbbdfd51dab82a8d563b57b00d7ca2194f16c69e6c25030b49a452fa24c721"
	I1129 09:11:43.177662  239663 cri.go:89] found id: "f78ddfecb898a10860947cd0138c0c91e432eb8133b9c1199e2378046faeefb6"
	I1129 09:11:43.177668  239663 cri.go:89] found id: "092bad9397a64f5518143503ccfeb6661abcd1fb66cf16f31703c648078497fe"
	I1129 09:11:43.177674  239663 cri.go:89] found id: "a2578393c04aeba954d20c8ac71275220f67953d2fd43b6e378182a2c47660e2"
	I1129 09:11:43.177679  239663 cri.go:89] found id: "a1cfbd7b390ca764419e13459208f92ccab83b9f560dab98155c947b575c9eb7"
	I1129 09:11:43.177684  239663 cri.go:89] found id: "45e32d34386d74e83368055caa5eb9f063ed6013f8c4cd7c8a1fbf290b1d66ef"
	I1129 09:11:43.177689  239663 cri.go:89] found id: "249d465154ffd1c8223cffce99d25f392334b77035bb4fd7a68ea732b0d1ffaa"
	I1129 09:11:43.177694  239663 cri.go:89] found id: ""
	I1129 09:11:43.177743  239663 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 09:11:43.190811  239663 retry.go:31] will retry after 267.590788ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:11:43Z" level=error msg="open /run/runc: no such file or directory"
	I1129 09:11:43.459354  239663 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:11:43.475615  239663 pause.go:52] kubelet running: false
	I1129 09:11:43.475674  239663 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1129 09:11:43.618187  239663 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1129 09:11:43.618288  239663 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1129 09:11:43.706017  239663 cri.go:89] found id: "a8bbbdfd51dab82a8d563b57b00d7ca2194f16c69e6c25030b49a452fa24c721"
	I1129 09:11:43.706039  239663 cri.go:89] found id: "f78ddfecb898a10860947cd0138c0c91e432eb8133b9c1199e2378046faeefb6"
	I1129 09:11:43.706060  239663 cri.go:89] found id: "092bad9397a64f5518143503ccfeb6661abcd1fb66cf16f31703c648078497fe"
	I1129 09:11:43.706064  239663 cri.go:89] found id: "a2578393c04aeba954d20c8ac71275220f67953d2fd43b6e378182a2c47660e2"
	I1129 09:11:43.706067  239663 cri.go:89] found id: "a1cfbd7b390ca764419e13459208f92ccab83b9f560dab98155c947b575c9eb7"
	I1129 09:11:43.706070  239663 cri.go:89] found id: "45e32d34386d74e83368055caa5eb9f063ed6013f8c4cd7c8a1fbf290b1d66ef"
	I1129 09:11:43.706073  239663 cri.go:89] found id: "249d465154ffd1c8223cffce99d25f392334b77035bb4fd7a68ea732b0d1ffaa"
	I1129 09:11:43.706076  239663 cri.go:89] found id: ""
	I1129 09:11:43.706115  239663 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 09:11:43.720566  239663 retry.go:31] will retry after 333.192483ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:11:43Z" level=error msg="open /run/runc: no such file or directory"
	I1129 09:11:44.054000  239663 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:11:44.068087  239663 pause.go:52] kubelet running: false
	I1129 09:11:44.068166  239663 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1129 09:11:44.200287  239663 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1129 09:11:44.200369  239663 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1129 09:11:44.277789  239663 cri.go:89] found id: "a8bbbdfd51dab82a8d563b57b00d7ca2194f16c69e6c25030b49a452fa24c721"
	I1129 09:11:44.277812  239663 cri.go:89] found id: "f78ddfecb898a10860947cd0138c0c91e432eb8133b9c1199e2378046faeefb6"
	I1129 09:11:44.277816  239663 cri.go:89] found id: "092bad9397a64f5518143503ccfeb6661abcd1fb66cf16f31703c648078497fe"
	I1129 09:11:44.277820  239663 cri.go:89] found id: "a2578393c04aeba954d20c8ac71275220f67953d2fd43b6e378182a2c47660e2"
	I1129 09:11:44.277823  239663 cri.go:89] found id: "a1cfbd7b390ca764419e13459208f92ccab83b9f560dab98155c947b575c9eb7"
	I1129 09:11:44.277826  239663 cri.go:89] found id: "45e32d34386d74e83368055caa5eb9f063ed6013f8c4cd7c8a1fbf290b1d66ef"
	I1129 09:11:44.277829  239663 cri.go:89] found id: "249d465154ffd1c8223cffce99d25f392334b77035bb4fd7a68ea732b0d1ffaa"
	I1129 09:11:44.277831  239663 cri.go:89] found id: ""
	I1129 09:11:44.277895  239663 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 09:11:44.290721  239663 retry.go:31] will retry after 292.30989ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:11:44Z" level=error msg="open /run/runc: no such file or directory"
	I1129 09:11:44.583248  239663 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:11:44.597100  239663 pause.go:52] kubelet running: false
	I1129 09:11:44.597159  239663 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1129 09:11:44.709926  239663 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1129 09:11:44.710002  239663 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1129 09:11:44.780942  239663 cri.go:89] found id: "a8bbbdfd51dab82a8d563b57b00d7ca2194f16c69e6c25030b49a452fa24c721"
	I1129 09:11:44.780967  239663 cri.go:89] found id: "f78ddfecb898a10860947cd0138c0c91e432eb8133b9c1199e2378046faeefb6"
	I1129 09:11:44.780972  239663 cri.go:89] found id: "092bad9397a64f5518143503ccfeb6661abcd1fb66cf16f31703c648078497fe"
	I1129 09:11:44.780975  239663 cri.go:89] found id: "a2578393c04aeba954d20c8ac71275220f67953d2fd43b6e378182a2c47660e2"
	I1129 09:11:44.780978  239663 cri.go:89] found id: "a1cfbd7b390ca764419e13459208f92ccab83b9f560dab98155c947b575c9eb7"
	I1129 09:11:44.780981  239663 cri.go:89] found id: "45e32d34386d74e83368055caa5eb9f063ed6013f8c4cd7c8a1fbf290b1d66ef"
	I1129 09:11:44.780983  239663 cri.go:89] found id: "249d465154ffd1c8223cffce99d25f392334b77035bb4fd7a68ea732b0d1ffaa"
	I1129 09:11:44.780987  239663 cri.go:89] found id: ""
	I1129 09:11:44.781029  239663 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 09:11:44.796716  239663 out.go:203] 
	W1129 09:11:44.797973  239663 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:11:44Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:11:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 09:11:44.797990  239663 out.go:285] * 
	* 
	W1129 09:11:44.801801  239663 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 09:11:44.803228  239663 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-295501 --alsologtostderr -v=5" : exit status 80
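The failure above reduces to `sudo runc list -f json` exiting 1 with `open /run/runc: no such file or directory`, which minikube retries once (retry.go:31) before aborting the pause with GUEST_PAUSE. A minimal standalone Go sketch of that retried check follows; it is an illustration only, not minikube's actual retry.go, and the 3 attempts and 300ms delay are assumptions approximating the ~292ms backoff in the log:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// Re-run `sudo runc list -f json` a few times, as the retry.go lines
	// in the log above do, and report the error that aborts the pause.
	func main() {
		for attempt := 1; attempt <= 3; attempt++ {
			out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
			if err == nil {
				fmt.Printf("running containers: %s\n", out)
				return
			}
			// On this node every attempt fails because runc's state
			// directory /run/runc does not exist.
			fmt.Printf("attempt %d: %v\n%s", attempt, err, out)
			time.Sleep(300 * time.Millisecond) // assumed backoff
		}
	}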
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-295501
helpers_test.go:243: (dbg) docker inspect pause-295501:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "968421c7df9896ef130cafb07722b48c470caea9e9ac404061d61d0ab4c7c407",
	        "Created": "2025-11-29T09:11:00.300950233Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 231486,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T09:11:00.343851267Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/968421c7df9896ef130cafb07722b48c470caea9e9ac404061d61d0ab4c7c407/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/968421c7df9896ef130cafb07722b48c470caea9e9ac404061d61d0ab4c7c407/hostname",
	        "HostsPath": "/var/lib/docker/containers/968421c7df9896ef130cafb07722b48c470caea9e9ac404061d61d0ab4c7c407/hosts",
	        "LogPath": "/var/lib/docker/containers/968421c7df9896ef130cafb07722b48c470caea9e9ac404061d61d0ab4c7c407/968421c7df9896ef130cafb07722b48c470caea9e9ac404061d61d0ab4c7c407-json.log",
	        "Name": "/pause-295501",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-295501:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-295501",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "968421c7df9896ef130cafb07722b48c470caea9e9ac404061d61d0ab4c7c407",
	                "LowerDir": "/var/lib/docker/overlay2/d540506d9a4ed58bf099fcdf8789b1803b6bbb4ef9fbbce681522acf8ad7cd02-init/diff:/var/lib/docker/overlay2/5b012372cfb54f6c71f4d7f0bca0124866eeda530eaa04bd84a67c7b4c8b35a8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d540506d9a4ed58bf099fcdf8789b1803b6bbb4ef9fbbce681522acf8ad7cd02/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d540506d9a4ed58bf099fcdf8789b1803b6bbb4ef9fbbce681522acf8ad7cd02/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d540506d9a4ed58bf099fcdf8789b1803b6bbb4ef9fbbce681522acf8ad7cd02/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-295501",
	                "Source": "/var/lib/docker/volumes/pause-295501/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-295501",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-295501",
	                "name.minikube.sigs.k8s.io": "pause-295501",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "321ef45b004e804139b1abab70942530f7f196de1f321290e0e3fbd69ffc8967",
	            "SandboxKey": "/var/run/docker/netns/321ef45b004e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33049"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33050"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33051"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33052"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-295501": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "963a9ee72cd35456162a3b76175450bb7d96a82608b4e2b95e3bbc0ccdc222ec",
	                    "EndpointID": "cea62c5f5e64e81855149fe2affc5280d0a44794cbe5ec428fefb5b54c830066",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "d2:81:07:6a:57:66",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-295501",
	                        "968421c7df98"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
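Note the `"Tmpfs": {"/run": "", "/tmp": ""}` entry in the HostConfig above: /run inside the kicbase container is a tmpfs, and /run/runc is runc's default state root, so the directory only exists while runc has container state to keep there. A quick hedged check of that directory (assumes the docker CLI on the host and the container name `pause-295501` taken from the inspect output; a sketch, not part of the test suite):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Look for runc's state directory inside the profile's container.
	// If it is missing, `runc list` there fails exactly as in the log above.
	func main() {
		out, err := exec.Command("docker", "exec", "pause-295501",
			"ls", "-ld", "/run/runc").CombinedOutput()
		fmt.Printf("err=%v\noutput: %s", err, out)
	}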
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-295501 -n pause-295501
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-295501 -n pause-295501: exit status 2 (337.493007ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-295501 logs -n 25
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-628644 sudo cat /lib/systemd/system/containerd.service                                                                         │ cilium-628644             │ jenkins │ v1.37.0 │ 29 Nov 25 09:08 UTC │                     │
	│ ssh     │ -p cilium-628644 sudo cat /etc/containerd/config.toml                                                                                    │ cilium-628644             │ jenkins │ v1.37.0 │ 29 Nov 25 09:08 UTC │                     │
	│ ssh     │ -p cilium-628644 sudo containerd config dump                                                                                             │ cilium-628644             │ jenkins │ v1.37.0 │ 29 Nov 25 09:08 UTC │                     │
	│ ssh     │ -p cilium-628644 sudo systemctl status crio --all --full --no-pager                                                                      │ cilium-628644             │ jenkins │ v1.37.0 │ 29 Nov 25 09:08 UTC │                     │
	│ ssh     │ -p cilium-628644 sudo systemctl cat crio --no-pager                                                                                      │ cilium-628644             │ jenkins │ v1.37.0 │ 29 Nov 25 09:08 UTC │                     │
	│ ssh     │ -p cilium-628644 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                            │ cilium-628644             │ jenkins │ v1.37.0 │ 29 Nov 25 09:08 UTC │                     │
	│ ssh     │ -p cilium-628644 sudo crio config                                                                                                        │ cilium-628644             │ jenkins │ v1.37.0 │ 29 Nov 25 09:08 UTC │                     │
	│ delete  │ -p cilium-628644                                                                                                                         │ cilium-628644             │ jenkins │ v1.37.0 │ 29 Nov 25 09:08 UTC │ 29 Nov 25 09:08 UTC │
	│ start   │ -p running-upgrade-246907 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-246907    │ jenkins │ v1.35.0 │ 29 Nov 25 09:08 UTC │ 29 Nov 25 09:08 UTC │
	│ ssh     │ cert-options-207443 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                              │ cert-options-207443       │ jenkins │ v1.37.0 │ 29 Nov 25 09:08 UTC │ 29 Nov 25 09:08 UTC │
	│ ssh     │ -p cert-options-207443 -- sudo cat /etc/kubernetes/admin.conf                                                                            │ cert-options-207443       │ jenkins │ v1.37.0 │ 29 Nov 25 09:08 UTC │ 29 Nov 25 09:08 UTC │
	│ delete  │ -p cert-options-207443                                                                                                                   │ cert-options-207443       │ jenkins │ v1.37.0 │ 29 Nov 25 09:08 UTC │ 29 Nov 25 09:08 UTC │
	│ start   │ -p kubernetes-upgrade-665137 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-665137 │ jenkins │ v1.37.0 │ 29 Nov 25 09:08 UTC │ 29 Nov 25 09:08 UTC │
	│ delete  │ -p force-systemd-env-076374                                                                                                              │ force-systemd-env-076374  │ jenkins │ v1.37.0 │ 29 Nov 25 09:08 UTC │ 29 Nov 25 09:08 UTC │
	│ start   │ -p stopped-upgrade-355524 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-355524    │ jenkins │ v1.35.0 │ 29 Nov 25 09:08 UTC │ 29 Nov 25 09:09 UTC │
	│ start   │ -p running-upgrade-246907 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-246907    │ jenkins │ v1.37.0 │ 29 Nov 25 09:08 UTC │                     │
	│ stop    │ -p kubernetes-upgrade-665137                                                                                                             │ kubernetes-upgrade-665137 │ jenkins │ v1.37.0 │ 29 Nov 25 09:08 UTC │ 29 Nov 25 09:09 UTC │
	│ start   │ -p kubernetes-upgrade-665137 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-665137 │ jenkins │ v1.37.0 │ 29 Nov 25 09:09 UTC │                     │
	│ stop    │ stopped-upgrade-355524 stop                                                                                                              │ stopped-upgrade-355524    │ jenkins │ v1.35.0 │ 29 Nov 25 09:09 UTC │ 29 Nov 25 09:09 UTC │
	│ start   │ -p stopped-upgrade-355524 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-355524    │ jenkins │ v1.37.0 │ 29 Nov 25 09:09 UTC │                     │
	│ start   │ -p cert-expiration-836438 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                │ cert-expiration-836438    │ jenkins │ v1.37.0 │ 29 Nov 25 09:10 UTC │ 29 Nov 25 09:10 UTC │
	│ delete  │ -p cert-expiration-836438                                                                                                                │ cert-expiration-836438    │ jenkins │ v1.37.0 │ 29 Nov 25 09:10 UTC │ 29 Nov 25 09:10 UTC │
	│ start   │ -p pause-295501 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-295501              │ jenkins │ v1.37.0 │ 29 Nov 25 09:10 UTC │ 29 Nov 25 09:11 UTC │
	│ start   │ -p pause-295501 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-295501              │ jenkins │ v1.37.0 │ 29 Nov 25 09:11 UTC │ 29 Nov 25 09:11 UTC │
	│ pause   │ -p pause-295501 --alsologtostderr -v=5                                                                                                   │ pause-295501              │ jenkins │ v1.37.0 │ 29 Nov 25 09:11 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:11:36
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 09:11:36.610088  237717 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:11:36.610439  237717 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:11:36.610452  237717 out.go:374] Setting ErrFile to fd 2...
	I1129 09:11:36.610460  237717 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:11:36.610820  237717 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 09:11:36.611460  237717 out.go:368] Setting JSON to false
	I1129 09:11:36.613045  237717 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3249,"bootTime":1764404248,"procs":319,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 09:11:36.613135  237717 start.go:143] virtualization: kvm guest
	I1129 09:11:36.615093  237717 out.go:179] * [pause-295501] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 09:11:36.616388  237717 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:11:36.616426  237717 notify.go:221] Checking for updates...
	I1129 09:11:36.618598  237717 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:11:36.619860  237717 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 09:11:36.621448  237717 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5652/.minikube
	I1129 09:11:36.622616  237717 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 09:11:36.623732  237717 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:11:36.625313  237717 config.go:182] Loaded profile config "pause-295501": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:11:36.626247  237717 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:11:36.653610  237717 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1129 09:11:36.653719  237717 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:11:36.719315  237717 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:84 SystemTime:2025-11-29 09:11:36.70843075 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:11:36.719424  237717 docker.go:319] overlay module found
	I1129 09:11:36.721192  237717 out.go:179] * Using the docker driver based on existing profile
	I1129 09:11:36.722374  237717 start.go:309] selected driver: docker
	I1129 09:11:36.722392  237717 start.go:927] validating driver "docker" against &{Name:pause-295501 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-295501 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:11:36.722533  237717 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:11:36.722636  237717 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:11:36.785791  237717 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:84 SystemTime:2025-11-29 09:11:36.773057508 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:11:36.786768  237717 cni.go:84] Creating CNI manager for ""
	I1129 09:11:36.786872  237717 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:11:36.786939  237717 start.go:353] cluster config:
	{Name:pause-295501 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-295501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:11:36.788935  237717 out.go:179] * Starting "pause-295501" primary control-plane node in "pause-295501" cluster
	I1129 09:11:36.789988  237717 cache.go:134] Beginning downloading kic base image for docker with crio
	I1129 09:11:36.791200  237717 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 09:11:36.792278  237717 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:11:36.792323  237717 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1129 09:11:36.792349  237717 cache.go:65] Caching tarball of preloaded images
	I1129 09:11:36.792389  237717 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 09:11:36.792464  237717 preload.go:238] Found /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1129 09:11:36.792481  237717 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1129 09:11:36.792645  237717 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/pause-295501/config.json ...
	I1129 09:11:36.820389  237717 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 09:11:36.820409  237717 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 09:11:36.820426  237717 cache.go:243] Successfully downloaded all kic artifacts
	I1129 09:11:36.820460  237717 start.go:360] acquireMachinesLock for pause-295501: {Name:mk1ad36e18b0d7e5b2ef49f75a67ac102a990d08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:11:36.820525  237717 start.go:364] duration metric: took 40.962µs to acquireMachinesLock for "pause-295501"
	I1129 09:11:36.820545  237717 start.go:96] Skipping create...Using existing machine configuration
	I1129 09:11:36.820555  237717 fix.go:54] fixHost starting: 
	I1129 09:11:36.820810  237717 cli_runner.go:164] Run: docker container inspect pause-295501 --format={{.State.Status}}
	I1129 09:11:36.842595  237717 fix.go:112] recreateIfNeeded on pause-295501: state=Running err=<nil>
	W1129 09:11:36.842635  237717 fix.go:138] unexpected machine state, will restart: <nil>
	I1129 09:11:33.611728  218317 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1129 09:11:33.612190  218317 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1129 09:11:33.612251  218317 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:11:33.612307  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:11:33.641602  218317 cri.go:89] found id: "ad97145496ded2f71cd6c4470059d5cc8eba68fc6ea8dd528688b83ade8c4db8"
	I1129 09:11:33.641629  218317 cri.go:89] found id: ""
	I1129 09:11:33.641640  218317 logs.go:282] 1 containers: [ad97145496ded2f71cd6c4470059d5cc8eba68fc6ea8dd528688b83ade8c4db8]
	I1129 09:11:33.641701  218317 ssh_runner.go:195] Run: which crictl
	I1129 09:11:33.646003  218317 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 09:11:33.646083  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:11:33.674768  218317 cri.go:89] found id: ""
	I1129 09:11:33.674791  218317 logs.go:282] 0 containers: []
	W1129 09:11:33.674799  218317 logs.go:284] No container was found matching "etcd"
	I1129 09:11:33.674805  218317 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 09:11:33.674875  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:11:33.702117  218317 cri.go:89] found id: ""
	I1129 09:11:33.702142  218317 logs.go:282] 0 containers: []
	W1129 09:11:33.702152  218317 logs.go:284] No container was found matching "coredns"
	I1129 09:11:33.702160  218317 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:11:33.702222  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:11:33.731396  218317 cri.go:89] found id: "d36b98de5d776212c9777ea4840006b8e9bc445d70128594889132af99755d99"
	I1129 09:11:33.731418  218317 cri.go:89] found id: ""
	I1129 09:11:33.731428  218317 logs.go:282] 1 containers: [d36b98de5d776212c9777ea4840006b8e9bc445d70128594889132af99755d99]
	I1129 09:11:33.731485  218317 ssh_runner.go:195] Run: which crictl
	I1129 09:11:33.735479  218317 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:11:33.735541  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:11:33.768572  218317 cri.go:89] found id: ""
	I1129 09:11:33.768595  218317 logs.go:282] 0 containers: []
	W1129 09:11:33.768602  218317 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:11:33.768609  218317 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:11:33.768654  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:11:33.801794  218317 cri.go:89] found id: "ead086a0104bf53c59ed5d8e223ccbb7be900745b5ce874ae12a9c40c089d336"
	I1129 09:11:33.801818  218317 cri.go:89] found id: ""
	I1129 09:11:33.801828  218317 logs.go:282] 1 containers: [ead086a0104bf53c59ed5d8e223ccbb7be900745b5ce874ae12a9c40c089d336]
	I1129 09:11:33.801921  218317 ssh_runner.go:195] Run: which crictl
	I1129 09:11:33.806224  218317 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 09:11:33.806290  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:11:33.839667  218317 cri.go:89] found id: ""
	I1129 09:11:33.839698  218317 logs.go:282] 0 containers: []
	W1129 09:11:33.839710  218317 logs.go:284] No container was found matching "kindnet"
	I1129 09:11:33.839720  218317 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:11:33.839782  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:11:33.873888  218317 cri.go:89] found id: ""
	I1129 09:11:33.873914  218317 logs.go:282] 0 containers: []
	W1129 09:11:33.873923  218317 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:11:33.873931  218317 logs.go:123] Gathering logs for kube-controller-manager [ead086a0104bf53c59ed5d8e223ccbb7be900745b5ce874ae12a9c40c089d336] ...
	I1129 09:11:33.873944  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ead086a0104bf53c59ed5d8e223ccbb7be900745b5ce874ae12a9c40c089d336"
	I1129 09:11:33.908306  218317 logs.go:123] Gathering logs for CRI-O ...
	I1129 09:11:33.908335  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 09:11:33.971487  218317 logs.go:123] Gathering logs for container status ...
	I1129 09:11:33.971527  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:11:34.006765  218317 logs.go:123] Gathering logs for kubelet ...
	I1129 09:11:34.006798  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:11:34.101323  218317 logs.go:123] Gathering logs for dmesg ...
	I1129 09:11:34.101363  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:11:34.118485  218317 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:11:34.118517  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:11:34.184350  218317 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:11:34.184374  218317 logs.go:123] Gathering logs for kube-apiserver [ad97145496ded2f71cd6c4470059d5cc8eba68fc6ea8dd528688b83ade8c4db8] ...
	I1129 09:11:34.184390  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ad97145496ded2f71cd6c4470059d5cc8eba68fc6ea8dd528688b83ade8c4db8"
	I1129 09:11:34.219737  218317 logs.go:123] Gathering logs for kube-scheduler [d36b98de5d776212c9777ea4840006b8e9bc445d70128594889132af99755d99] ...
	I1129 09:11:34.219778  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d36b98de5d776212c9777ea4840006b8e9bc445d70128594889132af99755d99"
	I1129 09:11:36.772926  218317 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1129 09:11:36.773389  218317 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1129 09:11:36.773446  218317 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:11:36.773508  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:11:36.807081  218317 cri.go:89] found id: "ad97145496ded2f71cd6c4470059d5cc8eba68fc6ea8dd528688b83ade8c4db8"
	I1129 09:11:36.807123  218317 cri.go:89] found id: ""
	I1129 09:11:36.807135  218317 logs.go:282] 1 containers: [ad97145496ded2f71cd6c4470059d5cc8eba68fc6ea8dd528688b83ade8c4db8]
	I1129 09:11:36.807200  218317 ssh_runner.go:195] Run: which crictl
	I1129 09:11:36.812020  218317 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 09:11:36.812103  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:11:36.844173  218317 cri.go:89] found id: ""
	I1129 09:11:36.844199  218317 logs.go:282] 0 containers: []
	W1129 09:11:36.844212  218317 logs.go:284] No container was found matching "etcd"
	I1129 09:11:36.844219  218317 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 09:11:36.844277  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:11:36.875731  218317 cri.go:89] found id: ""
	I1129 09:11:36.875761  218317 logs.go:282] 0 containers: []
	W1129 09:11:36.875780  218317 logs.go:284] No container was found matching "coredns"
	I1129 09:11:36.875788  218317 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:11:36.875863  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:11:36.906603  218317 cri.go:89] found id: "d36b98de5d776212c9777ea4840006b8e9bc445d70128594889132af99755d99"
	I1129 09:11:36.906628  218317 cri.go:89] found id: ""
	I1129 09:11:36.906637  218317 logs.go:282] 1 containers: [d36b98de5d776212c9777ea4840006b8e9bc445d70128594889132af99755d99]
	I1129 09:11:36.906695  218317 ssh_runner.go:195] Run: which crictl
	I1129 09:11:36.910792  218317 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:11:36.910889  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:11:36.942155  218317 cri.go:89] found id: ""
	I1129 09:11:36.942187  218317 logs.go:282] 0 containers: []
	W1129 09:11:36.942199  218317 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:11:36.942207  218317 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:11:36.942269  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:11:36.971467  218317 cri.go:89] found id: "ead086a0104bf53c59ed5d8e223ccbb7be900745b5ce874ae12a9c40c089d336"
	I1129 09:11:36.971494  218317 cri.go:89] found id: ""
	I1129 09:11:36.971503  218317 logs.go:282] 1 containers: [ead086a0104bf53c59ed5d8e223ccbb7be900745b5ce874ae12a9c40c089d336]
	I1129 09:11:36.971571  218317 ssh_runner.go:195] Run: which crictl
	I1129 09:11:36.975573  218317 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 09:11:36.975642  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:11:37.004655  218317 cri.go:89] found id: ""
	I1129 09:11:37.004685  218317 logs.go:282] 0 containers: []
	W1129 09:11:37.004694  218317 logs.go:284] No container was found matching "kindnet"
	I1129 09:11:37.004700  218317 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:11:37.004761  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:11:35.825079  214471 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1129 09:11:35.825592  214471 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1129 09:11:35.825668  214471 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:11:35.825734  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:11:35.862164  214471 cri.go:89] found id: "1dff8abbda684fe8a803a0d7feee0d46229689119dc2199e9c5b4d859caf176d"
	I1129 09:11:35.862191  214471 cri.go:89] found id: ""
	I1129 09:11:35.862199  214471 logs.go:282] 1 containers: [1dff8abbda684fe8a803a0d7feee0d46229689119dc2199e9c5b4d859caf176d]
	I1129 09:11:35.862244  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:35.866190  214471 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 09:11:35.866246  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:11:35.902809  214471 cri.go:89] found id: "6cdd44d8db846ffb376bf831123c4ce432f87f0616753fa1802db2474c6da5e0"
	I1129 09:11:35.902835  214471 cri.go:89] found id: ""
	I1129 09:11:35.902857  214471 logs.go:282] 1 containers: [6cdd44d8db846ffb376bf831123c4ce432f87f0616753fa1802db2474c6da5e0]
	I1129 09:11:35.902914  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:35.906893  214471 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 09:11:35.906958  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:11:35.942933  214471 cri.go:89] found id: ""
	I1129 09:11:35.942967  214471 logs.go:282] 0 containers: []
	W1129 09:11:35.942976  214471 logs.go:284] No container was found matching "coredns"
	I1129 09:11:35.942982  214471 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:11:35.943035  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:11:35.979802  214471 cri.go:89] found id: "fcff0da386e90a7e2da6915329552d3b7f08edc5e745bc390cd94526235e34ad"
	I1129 09:11:35.979822  214471 cri.go:89] found id: "fc7be3672b1d7cde9d3cd515c0ebafb17a2ff060a0fd47dbbb990b57cde6c4a7"
	I1129 09:11:35.979826  214471 cri.go:89] found id: ""
	I1129 09:11:35.979833  214471 logs.go:282] 2 containers: [fcff0da386e90a7e2da6915329552d3b7f08edc5e745bc390cd94526235e34ad fc7be3672b1d7cde9d3cd515c0ebafb17a2ff060a0fd47dbbb990b57cde6c4a7]
	I1129 09:11:35.979918  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:35.984011  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:35.987811  214471 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:11:35.987914  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:11:36.023943  214471 cri.go:89] found id: "31d6025ff82b3e085a9904c2d696e3cd87b765553852bee59bf02c5ca2982bb3"
	I1129 09:11:36.023971  214471 cri.go:89] found id: "9bc5d76fdb0a28cd329b1051fec59088325c606b89a8b925d3ab5b1519649cc3"
	I1129 09:11:36.023977  214471 cri.go:89] found id: ""
	I1129 09:11:36.023985  214471 logs.go:282] 2 containers: [31d6025ff82b3e085a9904c2d696e3cd87b765553852bee59bf02c5ca2982bb3 9bc5d76fdb0a28cd329b1051fec59088325c606b89a8b925d3ab5b1519649cc3]
	I1129 09:11:36.024035  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:36.028166  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:36.032192  214471 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:11:36.032250  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:11:36.068481  214471 cri.go:89] found id: "33f91ffe40a0d328d35ddf61661372bb263a7bc34d24185bd700d35146f1a322"
	I1129 09:11:36.068504  214471 cri.go:89] found id: ""
	I1129 09:11:36.068512  214471 logs.go:282] 1 containers: [33f91ffe40a0d328d35ddf61661372bb263a7bc34d24185bd700d35146f1a322]
	I1129 09:11:36.068570  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:36.072940  214471 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 09:11:36.073007  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:11:36.109833  214471 cri.go:89] found id: "fdaf7e1d1f92f5357e2a36a658d6c56cb5e970830f15eb88888d800f36574e19"
	I1129 09:11:36.109873  214471 cri.go:89] found id: "ec138943901d9c0c58c204789d15a0d1d6edad3161a00469e53b5fd0908fe194"
	I1129 09:11:36.109879  214471 cri.go:89] found id: ""
	I1129 09:11:36.109889  214471 logs.go:282] 2 containers: [fdaf7e1d1f92f5357e2a36a658d6c56cb5e970830f15eb88888d800f36574e19 ec138943901d9c0c58c204789d15a0d1d6edad3161a00469e53b5fd0908fe194]
	I1129 09:11:36.109950  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:36.114175  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:36.118096  214471 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:11:36.118185  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:11:36.153699  214471 cri.go:89] found id: "d29d7545c01a5541cd53753145eb6b8f887d1ca4f6e2c856d793a2af6f0f120b"
	I1129 09:11:36.153720  214471 cri.go:89] found id: ""
	I1129 09:11:36.153729  214471 logs.go:282] 1 containers: [d29d7545c01a5541cd53753145eb6b8f887d1ca4f6e2c856d793a2af6f0f120b]
	I1129 09:11:36.153787  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:36.157903  214471 logs.go:123] Gathering logs for kindnet [fdaf7e1d1f92f5357e2a36a658d6c56cb5e970830f15eb88888d800f36574e19] ...
	I1129 09:11:36.157926  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fdaf7e1d1f92f5357e2a36a658d6c56cb5e970830f15eb88888d800f36574e19"
	I1129 09:11:36.202462  214471 logs.go:123] Gathering logs for kubelet ...
	I1129 09:11:36.202504  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:11:36.297721  214471 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:11:36.297763  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:11:36.360967  214471 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:11:36.360991  214471 logs.go:123] Gathering logs for kube-controller-manager [33f91ffe40a0d328d35ddf61661372bb263a7bc34d24185bd700d35146f1a322] ...
	I1129 09:11:36.361003  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f91ffe40a0d328d35ddf61661372bb263a7bc34d24185bd700d35146f1a322"
	I1129 09:11:36.397247  214471 logs.go:123] Gathering logs for CRI-O ...
	I1129 09:11:36.397278  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 09:11:36.460996  214471 logs.go:123] Gathering logs for container status ...
	I1129 09:11:36.461039  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:11:36.508700  214471 logs.go:123] Gathering logs for kube-proxy [31d6025ff82b3e085a9904c2d696e3cd87b765553852bee59bf02c5ca2982bb3] ...
	I1129 09:11:36.508733  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31d6025ff82b3e085a9904c2d696e3cd87b765553852bee59bf02c5ca2982bb3"
	I1129 09:11:36.561438  214471 logs.go:123] Gathering logs for kube-proxy [9bc5d76fdb0a28cd329b1051fec59088325c606b89a8b925d3ab5b1519649cc3] ...
	I1129 09:11:36.561480  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bc5d76fdb0a28cd329b1051fec59088325c606b89a8b925d3ab5b1519649cc3"
	I1129 09:11:36.604632  214471 logs.go:123] Gathering logs for kindnet [ec138943901d9c0c58c204789d15a0d1d6edad3161a00469e53b5fd0908fe194] ...
	I1129 09:11:36.604664  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec138943901d9c0c58c204789d15a0d1d6edad3161a00469e53b5fd0908fe194"
	I1129 09:11:36.649526  214471 logs.go:123] Gathering logs for storage-provisioner [d29d7545c01a5541cd53753145eb6b8f887d1ca4f6e2c856d793a2af6f0f120b] ...
	I1129 09:11:36.649602  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d29d7545c01a5541cd53753145eb6b8f887d1ca4f6e2c856d793a2af6f0f120b"
	I1129 09:11:36.696684  214471 logs.go:123] Gathering logs for dmesg ...
	I1129 09:11:36.696710  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:11:36.715307  214471 logs.go:123] Gathering logs for kube-apiserver [1dff8abbda684fe8a803a0d7feee0d46229689119dc2199e9c5b4d859caf176d] ...
	I1129 09:11:36.715345  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1dff8abbda684fe8a803a0d7feee0d46229689119dc2199e9c5b4d859caf176d"
	I1129 09:11:36.765667  214471 logs.go:123] Gathering logs for etcd [6cdd44d8db846ffb376bf831123c4ce432f87f0616753fa1802db2474c6da5e0] ...
	I1129 09:11:36.765704  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6cdd44d8db846ffb376bf831123c4ce432f87f0616753fa1802db2474c6da5e0"
	I1129 09:11:36.824292  214471 logs.go:123] Gathering logs for kube-scheduler [fcff0da386e90a7e2da6915329552d3b7f08edc5e745bc390cd94526235e34ad] ...
	I1129 09:11:36.824335  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fcff0da386e90a7e2da6915329552d3b7f08edc5e745bc390cd94526235e34ad"
	I1129 09:11:36.905231  214471 logs.go:123] Gathering logs for kube-scheduler [fc7be3672b1d7cde9d3cd515c0ebafb17a2ff060a0fd47dbbb990b57cde6c4a7] ...
	I1129 09:11:36.905266  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc7be3672b1d7cde9d3cd515c0ebafb17a2ff060a0fd47dbbb990b57cde6c4a7"
	I1129 09:11:36.844746  237717 out.go:252] * Updating the running docker "pause-295501" container ...
	I1129 09:11:36.844788  237717 machine.go:94] provisionDockerMachine start ...
	I1129 09:11:36.844903  237717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-295501
	I1129 09:11:36.867948  237717 main.go:143] libmachine: Using SSH client type: native
	I1129 09:11:36.868294  237717 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33049 <nil> <nil>}
	I1129 09:11:36.868319  237717 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:11:37.022477  237717 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-295501
	
	I1129 09:11:37.022513  237717 ubuntu.go:182] provisioning hostname "pause-295501"
	I1129 09:11:37.022594  237717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-295501
	I1129 09:11:37.046373  237717 main.go:143] libmachine: Using SSH client type: native
	I1129 09:11:37.046738  237717 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33049 <nil> <nil>}
	I1129 09:11:37.046761  237717 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-295501 && echo "pause-295501" | sudo tee /etc/hostname
	I1129 09:11:37.214702  237717 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-295501
	
	I1129 09:11:37.214784  237717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-295501
	I1129 09:11:37.237618  237717 main.go:143] libmachine: Using SSH client type: native
	I1129 09:11:37.237976  237717 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33049 <nil> <nil>}
	I1129 09:11:37.238003  237717 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-295501' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-295501/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-295501' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:11:37.393446  237717 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 09:11:37.393479  237717 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-5652/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-5652/.minikube}
	I1129 09:11:37.393504  237717 ubuntu.go:190] setting up certificates
	I1129 09:11:37.393518  237717 provision.go:84] configureAuth start
	I1129 09:11:37.393579  237717 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-295501
	I1129 09:11:37.416151  237717 provision.go:143] copyHostCerts
	I1129 09:11:37.416229  237717 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem, removing ...
	I1129 09:11:37.416252  237717 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem
	I1129 09:11:37.416345  237717 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem (1078 bytes)
	I1129 09:11:37.416525  237717 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem, removing ...
	I1129 09:11:37.416541  237717 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem
	I1129 09:11:37.416584  237717 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem (1123 bytes)
	I1129 09:11:37.416690  237717 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem, removing ...
	I1129 09:11:37.416702  237717 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem
	I1129 09:11:37.416743  237717 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem (1675 bytes)
	I1129 09:11:37.416831  237717 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem org=jenkins.pause-295501 san=[127.0.0.1 192.168.85.2 localhost minikube pause-295501]
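(minikube generates this server certificate in-process; purely as a hedged sketch, the openssl equivalent would look roughly like the following. The CA paths and SAN list are copied from the log line above; the output file names are illustrative.)

	# hedged sketch, not minikube's actual code path
	CERTS=/home/jenkins/minikube-integration/22000-5652/.minikube/certs
	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.pause-295501"
	openssl x509 -req -in server.csr \
	  -CA "$CERTS/ca.pem" -CAkey "$CERTS/ca-key.pem" -CAcreateserial \
	  -out server.pem -days 365 \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:localhost,DNS:minikube,DNS:pause-295501')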
	I1129 09:11:37.442322  237717 provision.go:177] copyRemoteCerts
	I1129 09:11:37.442392  237717 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:11:37.442441  237717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-295501
	I1129 09:11:37.462924  237717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33049 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/pause-295501/id_rsa Username:docker}
	I1129 09:11:37.570219  237717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1129 09:11:37.590962  237717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1129 09:11:37.611389  237717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1129 09:11:37.631201  237717 provision.go:87] duration metric: took 237.670256ms to configureAuth
	I1129 09:11:37.631238  237717 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:11:37.631466  237717 config.go:182] Loaded profile config "pause-295501": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:11:37.631583  237717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-295501
	I1129 09:11:37.654340  237717 main.go:143] libmachine: Using SSH client type: native
	I1129 09:11:37.654674  237717 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33049 <nil> <nil>}
	I1129 09:11:37.654709  237717 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 09:11:38.002511  237717 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 09:11:38.002538  237717 machine.go:97] duration metric: took 1.157738998s to provisionDockerMachine
	I1129 09:11:38.002549  237717 start.go:293] postStartSetup for "pause-295501" (driver="docker")
	I1129 09:11:38.002559  237717 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:11:38.002607  237717 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:11:38.002650  237717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-295501
	I1129 09:11:38.022540  237717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33049 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/pause-295501/id_rsa Username:docker}
	I1129 09:11:38.127016  237717 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:11:38.130956  237717 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:11:38.130986  237717 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:11:38.130996  237717 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5652/.minikube/addons for local assets ...
	I1129 09:11:38.131043  237717 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5652/.minikube/files for local assets ...
	I1129 09:11:38.131112  237717 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem -> 92162.pem in /etc/ssl/certs
	I1129 09:11:38.131213  237717 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:11:38.139910  237717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem --> /etc/ssl/certs/92162.pem (1708 bytes)
	I1129 09:11:38.159979  237717 start.go:296] duration metric: took 157.417141ms for postStartSetup
	I1129 09:11:38.160088  237717 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:11:38.160139  237717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-295501
	I1129 09:11:38.179660  237717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33049 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/pause-295501/id_rsa Username:docker}
	I1129 09:11:38.280587  237717 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 09:11:38.285818  237717 fix.go:56] duration metric: took 1.465256722s for fixHost
	I1129 09:11:38.285873  237717 start.go:83] releasing machines lock for "pause-295501", held for 1.465336471s
	I1129 09:11:38.285958  237717 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-295501
	I1129 09:11:38.305486  237717 ssh_runner.go:195] Run: cat /version.json
	I1129 09:11:38.305551  237717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-295501
	I1129 09:11:38.305576  237717 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:11:38.305676  237717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-295501
	I1129 09:11:38.325651  237717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33049 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/pause-295501/id_rsa Username:docker}
	I1129 09:11:38.325993  237717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33049 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/pause-295501/id_rsa Username:docker}
	I1129 09:11:38.477537  237717 ssh_runner.go:195] Run: systemctl --version
	I1129 09:11:38.484701  237717 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 09:11:38.523899  237717 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:11:38.529083  237717 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:11:38.529157  237717 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:11:38.538226  237717 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1129 09:11:38.538256  237717 start.go:496] detecting cgroup driver to use...
	I1129 09:11:38.538291  237717 detect.go:190] detected "systemd" cgroup driver on host os
	I1129 09:11:38.538338  237717 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 09:11:38.554313  237717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 09:11:38.568374  237717 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:11:38.568436  237717 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:11:38.584933  237717 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:11:38.598824  237717 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:11:38.718143  237717 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:11:38.831183  237717 docker.go:234] disabling docker service ...
	I1129 09:11:38.831250  237717 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:11:38.846685  237717 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:11:38.860696  237717 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:11:38.972575  237717 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:11:39.082673  237717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:11:39.096269  237717 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:11:39.112179  237717 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 09:11:39.112254  237717 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:11:39.122438  237717 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1129 09:11:39.122515  237717 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:11:39.132764  237717 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:11:39.142828  237717 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:11:39.152940  237717 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:11:39.162385  237717 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:11:39.172436  237717 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:11:39.181909  237717 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
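(The three sed edits above should leave a fragment like the one sketched below in /etc/crio/crio.conf.d/02-crio.conf, injecting the sysctl into every pod so unprivileged containers can bind ports below 1024. Hedged by-hand check:)

	sudo grep -A 2 '^ *default_sysctls' /etc/crio/crio.conf.d/02-crio.conf
	# expected, per the sed commands above:
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]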
	I1129 09:11:39.191543  237717 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:11:39.200199  237717 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:11:39.208660  237717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:11:39.318149  237717 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1129 09:11:39.517670  237717 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 09:11:39.517734  237717 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 09:11:39.522421  237717 start.go:564] Will wait 60s for crictl version
	I1129 09:11:39.522503  237717 ssh_runner.go:195] Run: which crictl
	I1129 09:11:39.526579  237717 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:11:39.553771  237717 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1129 09:11:39.553868  237717 ssh_runner.go:195] Run: crio --version
	I1129 09:11:39.586340  237717 ssh_runner.go:195] Run: crio --version
	I1129 09:11:39.621800  237717 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1129 09:11:39.622951  237717 cli_runner.go:164] Run: docker network inspect pause-295501 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:11:39.643640  237717 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1129 09:11:39.648153  237717 kubeadm.go:884] updating cluster {Name:pause-295501 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-295501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false
registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:11:39.648344  237717 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:11:39.648410  237717 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:11:39.685026  237717 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:11:39.685053  237717 crio.go:433] Images already preloaded, skipping extraction
	I1129 09:11:39.685108  237717 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:11:39.714495  237717 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:11:39.714520  237717 cache_images.go:86] Images are preloaded, skipping loading
	I1129 09:11:39.714529  237717 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1129 09:11:39.714685  237717 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-295501 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-295501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
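(This unit text is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few steps below; a hedged sketch of verifying systemd picked up the drop-in. The flag value matches the ExecStart line above.)

	sudo systemctl daemon-reload
	systemctl cat kubelet.service | grep -- '--node-ip=192.168.85.2'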
	I1129 09:11:39.714769  237717 ssh_runner.go:195] Run: crio config
	I1129 09:11:39.767932  237717 cni.go:84] Creating CNI manager for ""
	I1129 09:11:39.767955  237717 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:11:39.767977  237717 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 09:11:39.768003  237717 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-295501 NodeName:pause-295501 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:11:39.768181  237717 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-295501"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1129 09:11:39.768252  237717 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 09:11:39.777229  237717 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 09:11:39.777309  237717 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:11:39.787769  237717 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1129 09:11:39.803706  237717 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:11:39.819138  237717 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
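(The file just copied is the kubeadm config dumped above. On this restart path it is only diffed against the copy already on disk, as seen in the kubeadm.yaml diff further below; on a fresh start, a config like this would be handed to kubeadm directly, roughly as sketched here. The flags are illustrative, not minikube's exact invocation.)

	# hedged sketch; flags illustrative
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new \
	  --ignore-preflight-errors=all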
	I1129 09:11:39.833675  237717 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1129 09:11:39.838158  237717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:11:39.956854  237717 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:11:39.973108  237717 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/pause-295501 for IP: 192.168.85.2
	I1129 09:11:39.973144  237717 certs.go:195] generating shared ca certs ...
	I1129 09:11:39.973166  237717 certs.go:227] acquiring lock for ca certs: {Name:mk9945b3aa42b3d50b9bfd795af3a1c63f4e35bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:11:39.973359  237717 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key
	I1129 09:11:39.973427  237717 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key
	I1129 09:11:39.973442  237717 certs.go:257] generating profile certs ...
	I1129 09:11:39.973584  237717 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/pause-295501/client.key
	I1129 09:11:39.973668  237717 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/pause-295501/apiserver.key.a9383738
	I1129 09:11:39.973742  237717 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/pause-295501/proxy-client.key
	I1129 09:11:39.973953  237717 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216.pem (1338 bytes)
	W1129 09:11:39.974022  237717 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216_empty.pem, impossibly tiny 0 bytes
	I1129 09:11:39.974037  237717 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem (1679 bytes)
	I1129 09:11:39.974087  237717 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem (1078 bytes)
	I1129 09:11:39.974129  237717 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:11:39.974181  237717 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem (1675 bytes)
	I1129 09:11:39.974244  237717 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem (1708 bytes)
	I1129 09:11:39.975176  237717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:11:39.997039  237717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 09:11:40.020614  237717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:11:40.044221  237717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1129 09:11:40.066434  237717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/pause-295501/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1129 09:11:40.089309  237717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/pause-295501/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 09:11:40.112148  237717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/pause-295501/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:11:40.134525  237717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/pause-295501/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1129 09:11:40.156083  237717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216.pem --> /usr/share/ca-certificates/9216.pem (1338 bytes)
	I1129 09:11:40.178153  237717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem --> /usr/share/ca-certificates/92162.pem (1708 bytes)
	I1129 09:11:40.199367  237717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:11:40.222169  237717 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:11:40.236967  237717 ssh_runner.go:195] Run: openssl version
	I1129 09:11:40.244348  237717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9216.pem && ln -fs /usr/share/ca-certificates/9216.pem /etc/ssl/certs/9216.pem"
	I1129 09:11:40.255410  237717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9216.pem
	I1129 09:11:40.259982  237717 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:34 /usr/share/ca-certificates/9216.pem
	I1129 09:11:40.260043  237717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9216.pem
	I1129 09:11:40.307814  237717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9216.pem /etc/ssl/certs/51391683.0"
	I1129 09:11:40.318727  237717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/92162.pem && ln -fs /usr/share/ca-certificates/92162.pem /etc/ssl/certs/92162.pem"
	I1129 09:11:40.329742  237717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92162.pem
	I1129 09:11:40.334663  237717 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:34 /usr/share/ca-certificates/92162.pem
	I1129 09:11:40.334737  237717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92162.pem
	I1129 09:11:40.380533  237717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/92162.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 09:11:40.391212  237717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:11:40.402988  237717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:11:40.407893  237717 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:11:40.407953  237717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:11:40.455320  237717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
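(The 51391683.0, 3ec20f2e.0, and b5213941.0 names above follow OpenSSL's subject-hash lookup convention: the link name is the certificate's subject hash plus a ".0" suffix, which is exactly what the preceding "openssl x509 -hash -noout" runs compute. A hedged sketch deriving one such link by hand:)

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"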
	I1129 09:11:40.465008  237717 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:11:40.469719  237717 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1129 09:11:40.515488  237717 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1129 09:11:40.566689  237717 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1129 09:11:40.612408  237717 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1129 09:11:40.658442  237717 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1129 09:11:40.696605  237717 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
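(Each -checkend 86400 run above exits non-zero if the certificate expires within the next 86400 seconds, i.e. 24 hours, which is what would trigger regeneration. A hedged one-liner making the outcome explicit, using one of the paths checked above:)

	openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt \
	  && echo "valid for at least 24h" || echo "expires within 24h"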
	I1129 09:11:40.739318  237717 kubeadm.go:401] StartCluster: {Name:pause-295501 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-295501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false
registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:11:40.739459  237717 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 09:11:40.739541  237717 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:11:40.771092  237717 cri.go:89] found id: "a8bbbdfd51dab82a8d563b57b00d7ca2194f16c69e6c25030b49a452fa24c721"
	I1129 09:11:40.771119  237717 cri.go:89] found id: "f78ddfecb898a10860947cd0138c0c91e432eb8133b9c1199e2378046faeefb6"
	I1129 09:11:40.771133  237717 cri.go:89] found id: "092bad9397a64f5518143503ccfeb6661abcd1fb66cf16f31703c648078497fe"
	I1129 09:11:40.771137  237717 cri.go:89] found id: "a2578393c04aeba954d20c8ac71275220f67953d2fd43b6e378182a2c47660e2"
	I1129 09:11:40.771140  237717 cri.go:89] found id: "a1cfbd7b390ca764419e13459208f92ccab83b9f560dab98155c947b575c9eb7"
	I1129 09:11:40.771143  237717 cri.go:89] found id: "45e32d34386d74e83368055caa5eb9f063ed6013f8c4cd7c8a1fbf290b1d66ef"
	I1129 09:11:40.771146  237717 cri.go:89] found id: "249d465154ffd1c8223cffce99d25f392334b77035bb4fd7a68ea732b0d1ffaa"
	I1129 09:11:40.771149  237717 cri.go:89] found id: ""
	I1129 09:11:40.771195  237717 ssh_runner.go:195] Run: sudo runc list -f json
	W1129 09:11:40.785435  237717 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:11:40Z" level=error msg="open /run/runc: no such file or directory"
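(runc keeps its container state under a --root directory, /run/runc by default, so this failure only means nothing exists under that default root; minikube treats it as "no paused containers" and continues with the restart below. A hedged sketch of the same probe with the root spelled out:)

	sudo runc --root /run/runc list -f json 2>/dev/null || echo "no runc state under /run/runc"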
	I1129 09:11:40.785498  237717 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:11:40.794536  237717 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1129 09:11:40.794560  237717 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1129 09:11:40.794610  237717 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1129 09:11:40.803131  237717 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1129 09:11:40.803931  237717 kubeconfig.go:125] found "pause-295501" server: "https://192.168.85.2:8443"
	I1129 09:11:40.804998  237717 kapi.go:59] client config for pause-295501: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22000-5652/.minikube/profiles/pause-295501/client.crt", KeyFile:"/home/jenkins/minikube-integration/22000-5652/.minikube/profiles/pause-295501/client.key", CAFile:"/home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1129 09:11:40.805426  237717 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1129 09:11:40.805442  237717 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1129 09:11:40.805447  237717 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1129 09:11:40.805451  237717 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1129 09:11:40.805454  237717 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
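(The rest.Config above is built from the pause-295501 entry in the kubeconfig located just above; a hedged by-hand equivalent. The context name assumes minikube's profile-named default.)

	kubectl --kubeconfig /home/jenkins/minikube-integration/22000-5652/kubeconfig \
	  --context pause-295501 get nodes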
	I1129 09:11:40.805739  237717 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1129 09:11:40.814523  237717 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1129 09:11:40.814555  237717 kubeadm.go:602] duration metric: took 19.989735ms to restartPrimaryControlPlane
	I1129 09:11:40.814565  237717 kubeadm.go:403] duration metric: took 75.263822ms to StartCluster
	I1129 09:11:40.814579  237717 settings.go:142] acquiring lock: {Name:mkebe0b2667fa03f51e92459cd7b7f37fd4c23bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:11:40.814656  237717 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 09:11:40.816348  237717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/kubeconfig: {Name:mk6280be5f60ace95b1c1acc672c087e2366542f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:11:40.816652  237717 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 09:11:40.816763  237717 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 09:11:40.816919  237717 config.go:182] Loaded profile config "pause-295501": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:11:40.820757  237717 out.go:179] * Verifying Kubernetes components...
	I1129 09:11:40.820757  237717 out.go:179] * Enabled addons: 
	I1129 09:11:36.988515  219843 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 09:11:36.989050  219843 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
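(A hedged by-hand version of the healthz probe that keeps failing above; -k skips TLS verification, which the real client does not do.)

	curl -k -sS -m 2 https://192.168.76.2:8443/healthz || echo "apiserver still refusing connections"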
	I1129 09:11:36.989115  219843 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:11:36.989180  219843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:11:37.030202  219843 cri.go:89] found id: "c401851db3b97d96a64ca91142bb25ffcb7394406acd89ab1c6ee91b28883f83"
	I1129 09:11:37.030229  219843 cri.go:89] found id: ""
	I1129 09:11:37.030240  219843 logs.go:282] 1 containers: [c401851db3b97d96a64ca91142bb25ffcb7394406acd89ab1c6ee91b28883f83]
	I1129 09:11:37.030309  219843 ssh_runner.go:195] Run: which crictl
	I1129 09:11:37.034740  219843 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 09:11:37.034808  219843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:11:37.077093  219843 cri.go:89] found id: ""
	I1129 09:11:37.077126  219843 logs.go:282] 0 containers: []
	W1129 09:11:37.077137  219843 logs.go:284] No container was found matching "etcd"
	I1129 09:11:37.077146  219843 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 09:11:37.077214  219843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:11:37.120049  219843 cri.go:89] found id: ""
	I1129 09:11:37.120080  219843 logs.go:282] 0 containers: []
	W1129 09:11:37.120091  219843 logs.go:284] No container was found matching "coredns"
	I1129 09:11:37.120099  219843 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:11:37.120169  219843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:11:37.159793  219843 cri.go:89] found id: "1c4cc46e5af366e3fd498dd73ec8725595c451cd6cfe29b8e25688ac13578fc9"
	I1129 09:11:37.159819  219843 cri.go:89] found id: ""
	I1129 09:11:37.159830  219843 logs.go:282] 1 containers: [1c4cc46e5af366e3fd498dd73ec8725595c451cd6cfe29b8e25688ac13578fc9]
	I1129 09:11:37.159912  219843 ssh_runner.go:195] Run: which crictl
	I1129 09:11:37.164147  219843 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:11:37.164229  219843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:11:37.203261  219843 cri.go:89] found id: ""
	I1129 09:11:37.203293  219843 logs.go:282] 0 containers: []
	W1129 09:11:37.203317  219843 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:11:37.203326  219843 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:11:37.203389  219843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:11:37.245966  219843 cri.go:89] found id: "e88d98b3aacbd0d3318bd26614fb599cf82b0f04661b694435b4e58ea4629116"
	I1129 09:11:37.245991  219843 cri.go:89] found id: ""
	I1129 09:11:37.246002  219843 logs.go:282] 1 containers: [e88d98b3aacbd0d3318bd26614fb599cf82b0f04661b694435b4e58ea4629116]
	I1129 09:11:37.246077  219843 ssh_runner.go:195] Run: which crictl
	I1129 09:11:37.250685  219843 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 09:11:37.250767  219843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:11:37.294903  219843 cri.go:89] found id: ""
	I1129 09:11:37.294930  219843 logs.go:282] 0 containers: []
	W1129 09:11:37.294938  219843 logs.go:284] No container was found matching "kindnet"
	I1129 09:11:37.294944  219843 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:11:37.295001  219843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:11:37.335264  219843 cri.go:89] found id: ""
	I1129 09:11:37.335296  219843 logs.go:282] 0 containers: []
	W1129 09:11:37.335311  219843 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:11:37.335323  219843 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:11:37.335339  219843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:11:37.411900  219843 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:11:37.411923  219843 logs.go:123] Gathering logs for kube-apiserver [c401851db3b97d96a64ca91142bb25ffcb7394406acd89ab1c6ee91b28883f83] ...
	I1129 09:11:37.411937  219843 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c401851db3b97d96a64ca91142bb25ffcb7394406acd89ab1c6ee91b28883f83"
	I1129 09:11:37.455613  219843 logs.go:123] Gathering logs for kube-scheduler [1c4cc46e5af366e3fd498dd73ec8725595c451cd6cfe29b8e25688ac13578fc9] ...
	I1129 09:11:37.455644  219843 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c4cc46e5af366e3fd498dd73ec8725595c451cd6cfe29b8e25688ac13578fc9"
	I1129 09:11:37.531181  219843 logs.go:123] Gathering logs for kube-controller-manager [e88d98b3aacbd0d3318bd26614fb599cf82b0f04661b694435b4e58ea4629116] ...
	I1129 09:11:37.531224  219843 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e88d98b3aacbd0d3318bd26614fb599cf82b0f04661b694435b4e58ea4629116"
	I1129 09:11:37.569892  219843 logs.go:123] Gathering logs for CRI-O ...
	I1129 09:11:37.569927  219843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 09:11:37.616758  219843 logs.go:123] Gathering logs for container status ...
	I1129 09:11:37.616793  219843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:11:37.659511  219843 logs.go:123] Gathering logs for kubelet ...
	I1129 09:11:37.659538  219843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:11:37.749494  219843 logs.go:123] Gathering logs for dmesg ...
	I1129 09:11:37.749535  219843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:11:40.268097  219843 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 09:11:40.268604  219843 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1129 09:11:40.268671  219843 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:11:40.268733  219843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:11:40.313690  219843 cri.go:89] found id: "c401851db3b97d96a64ca91142bb25ffcb7394406acd89ab1c6ee91b28883f83"
	I1129 09:11:40.313717  219843 cri.go:89] found id: ""
	I1129 09:11:40.313728  219843 logs.go:282] 1 containers: [c401851db3b97d96a64ca91142bb25ffcb7394406acd89ab1c6ee91b28883f83]
	I1129 09:11:40.313792  219843 ssh_runner.go:195] Run: which crictl
	I1129 09:11:40.318223  219843 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 09:11:40.318292  219843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:11:40.363176  219843 cri.go:89] found id: ""
	I1129 09:11:40.363203  219843 logs.go:282] 0 containers: []
	W1129 09:11:40.363214  219843 logs.go:284] No container was found matching "etcd"
	I1129 09:11:40.363221  219843 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 09:11:40.363278  219843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:11:40.406516  219843 cri.go:89] found id: ""
	I1129 09:11:40.406544  219843 logs.go:282] 0 containers: []
	W1129 09:11:40.406578  219843 logs.go:284] No container was found matching "coredns"
	I1129 09:11:40.406591  219843 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:11:40.406652  219843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:11:40.448896  219843 cri.go:89] found id: "1c4cc46e5af366e3fd498dd73ec8725595c451cd6cfe29b8e25688ac13578fc9"
	I1129 09:11:40.448922  219843 cri.go:89] found id: ""
	I1129 09:11:40.448932  219843 logs.go:282] 1 containers: [1c4cc46e5af366e3fd498dd73ec8725595c451cd6cfe29b8e25688ac13578fc9]
	I1129 09:11:40.448995  219843 ssh_runner.go:195] Run: which crictl
	I1129 09:11:40.454514  219843 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:11:40.454607  219843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:11:40.497404  219843 cri.go:89] found id: ""
	I1129 09:11:40.497432  219843 logs.go:282] 0 containers: []
	W1129 09:11:40.497445  219843 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:11:40.497453  219843 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:11:40.497515  219843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:11:40.543655  219843 cri.go:89] found id: "e88d98b3aacbd0d3318bd26614fb599cf82b0f04661b694435b4e58ea4629116"
	I1129 09:11:40.543678  219843 cri.go:89] found id: ""
	I1129 09:11:40.543687  219843 logs.go:282] 1 containers: [e88d98b3aacbd0d3318bd26614fb599cf82b0f04661b694435b4e58ea4629116]
	I1129 09:11:40.543749  219843 ssh_runner.go:195] Run: which crictl
	I1129 09:11:40.548165  219843 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 09:11:40.548242  219843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:11:40.590718  219843 cri.go:89] found id: ""
	I1129 09:11:40.590744  219843 logs.go:282] 0 containers: []
	W1129 09:11:40.590755  219843 logs.go:284] No container was found matching "kindnet"
	I1129 09:11:40.590763  219843 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:11:40.590824  219843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:11:40.633221  219843 cri.go:89] found id: ""
	I1129 09:11:40.633248  219843 logs.go:282] 0 containers: []
	W1129 09:11:40.633259  219843 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:11:40.633274  219843 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:11:40.633290  219843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:11:40.701716  219843 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:11:40.701736  219843 logs.go:123] Gathering logs for kube-apiserver [c401851db3b97d96a64ca91142bb25ffcb7394406acd89ab1c6ee91b28883f83] ...
	I1129 09:11:40.701749  219843 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c401851db3b97d96a64ca91142bb25ffcb7394406acd89ab1c6ee91b28883f83"
	I1129 09:11:40.745170  219843 logs.go:123] Gathering logs for kube-scheduler [1c4cc46e5af366e3fd498dd73ec8725595c451cd6cfe29b8e25688ac13578fc9] ...
	I1129 09:11:40.745202  219843 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c4cc46e5af366e3fd498dd73ec8725595c451cd6cfe29b8e25688ac13578fc9"
	I1129 09:11:40.819819  219843 logs.go:123] Gathering logs for kube-controller-manager [e88d98b3aacbd0d3318bd26614fb599cf82b0f04661b694435b4e58ea4629116] ...
	I1129 09:11:40.819865  219843 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e88d98b3aacbd0d3318bd26614fb599cf82b0f04661b694435b4e58ea4629116"
	I1129 09:11:40.865441  219843 logs.go:123] Gathering logs for CRI-O ...
	I1129 09:11:40.865480  219843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 09:11:40.919051  219843 logs.go:123] Gathering logs for container status ...
	I1129 09:11:40.919115  219843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:11:40.960875  219843 logs.go:123] Gathering logs for kubelet ...
	I1129 09:11:40.960907  219843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:11:41.058946  219843 logs.go:123] Gathering logs for dmesg ...
	I1129 09:11:41.058993  219843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
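
The gather pass above repeats one pattern per control-plane component: list matching container IDs with `crictl ps -a --quiet --name=<component>`, then tail each hit with `crictl logs --tail 400 <id>`. Below is a minimal standalone Go sketch of that pattern; `gatherComponentLogs` is a hypothetical name (not minikube's code), and it assumes crictl is installed on the node and sudo is available.

	// gatherComponentLogs is a hypothetical sketch of the enumerate-then-tail
	// pattern visible in the log above: list matching container IDs with
	// `crictl ps`, then tail each one with `crictl logs --tail 400`.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func gatherComponentLogs(name string) error {
		// Equivalent of: sudo crictl ps -a --quiet --name=<name>
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return err
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			return nil
		}
		for _, id := range ids {
			// Equivalent of: sudo crictl logs --tail 400 <id>
			logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			if err != nil {
				return err
			}
			fmt.Printf("== %s [%s] ==\n%s", name, id, logs)
		}
		return nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			if err := gatherComponentLogs(c); err != nil {
				fmt.Println("error:", err)
			}
		}
	}
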
	I1129 09:11:40.821826  237717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:11:40.821828  237717 addons.go:530] duration metric: took 5.075699ms for enable addons: enabled=[]
	I1129 09:11:40.945713  237717 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:11:40.961579  237717 node_ready.go:35] waiting up to 6m0s for node "pause-295501" to be "Ready" ...
	I1129 09:11:40.969379  237717 node_ready.go:49] node "pause-295501" is "Ready"
	I1129 09:11:40.969413  237717 node_ready.go:38] duration metric: took 7.804382ms for node "pause-295501" to be "Ready" ...
	I1129 09:11:40.969428  237717 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:11:40.969484  237717 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:11:40.983937  237717 api_server.go:72] duration metric: took 167.245161ms to wait for apiserver process to appear ...
	I1129 09:11:40.983968  237717 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:11:40.983991  237717 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:11:40.989114  237717 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1129 09:11:40.990011  237717 api_server.go:141] control plane version: v1.34.1
	I1129 09:11:40.990039  237717 api_server.go:131] duration metric: took 6.06432ms to wait for apiserver health ...
	I1129 09:11:40.990049  237717 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:11:40.993045  237717 system_pods.go:59] 7 kube-system pods found
	I1129 09:11:40.993072  237717 system_pods.go:61] "coredns-66bc5c9577-zwrqh" [22030a4d-3d80-4ff5-b4ff-245caa1db156] Running
	I1129 09:11:40.993078  237717 system_pods.go:61] "etcd-pause-295501" [8a11dab1-cbef-4fd6-b149-0c9e2408f284] Running
	I1129 09:11:40.993082  237717 system_pods.go:61] "kindnet-st2fs" [a6934cc7-9fdf-4551-bfd6-b001f95eb4f2] Running
	I1129 09:11:40.993085  237717 system_pods.go:61] "kube-apiserver-pause-295501" [9c390dfe-6ae0-484b-99ad-306b1178b990] Running
	I1129 09:11:40.993088  237717 system_pods.go:61] "kube-controller-manager-pause-295501" [6155afce-9251-4e9f-acf5-8e9d6099f2a6] Running
	I1129 09:11:40.993091  237717 system_pods.go:61] "kube-proxy-f4kr8" [049b4663-ac1a-4dfc-9ab7-7060baa838e6] Running
	I1129 09:11:40.993094  237717 system_pods.go:61] "kube-scheduler-pause-295501" [c1e85599-cb4d-4518-be61-1d19147cd2e6] Running
	I1129 09:11:40.993100  237717 system_pods.go:74] duration metric: took 3.04543ms to wait for pod list to return data ...
	I1129 09:11:40.993107  237717 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:11:40.995365  237717 default_sa.go:45] found service account: "default"
	I1129 09:11:40.995398  237717 default_sa.go:55] duration metric: took 2.284204ms for default service account to be created ...
	I1129 09:11:40.995408  237717 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 09:11:40.998437  237717 system_pods.go:86] 7 kube-system pods found
	I1129 09:11:40.998469  237717 system_pods.go:89] "coredns-66bc5c9577-zwrqh" [22030a4d-3d80-4ff5-b4ff-245caa1db156] Running
	I1129 09:11:40.998479  237717 system_pods.go:89] "etcd-pause-295501" [8a11dab1-cbef-4fd6-b149-0c9e2408f284] Running
	I1129 09:11:40.998485  237717 system_pods.go:89] "kindnet-st2fs" [a6934cc7-9fdf-4551-bfd6-b001f95eb4f2] Running
	I1129 09:11:40.998490  237717 system_pods.go:89] "kube-apiserver-pause-295501" [9c390dfe-6ae0-484b-99ad-306b1178b990] Running
	I1129 09:11:40.998495  237717 system_pods.go:89] "kube-controller-manager-pause-295501" [6155afce-9251-4e9f-acf5-8e9d6099f2a6] Running
	I1129 09:11:40.998500  237717 system_pods.go:89] "kube-proxy-f4kr8" [049b4663-ac1a-4dfc-9ab7-7060baa838e6] Running
	I1129 09:11:40.998505  237717 system_pods.go:89] "kube-scheduler-pause-295501" [c1e85599-cb4d-4518-be61-1d19147cd2e6] Running
	I1129 09:11:40.998520  237717 system_pods.go:126] duration metric: took 3.105692ms to wait for k8s-apps to be running ...
	I1129 09:11:40.998532  237717 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 09:11:40.998586  237717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:11:41.012566  237717 system_svc.go:56] duration metric: took 14.026429ms WaitForService to wait for kubelet
	I1129 09:11:41.012593  237717 kubeadm.go:587] duration metric: took 195.907676ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:11:41.012609  237717 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:11:41.015321  237717 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1129 09:11:41.015368  237717 node_conditions.go:123] node cpu capacity is 8
	I1129 09:11:41.015390  237717 node_conditions.go:105] duration metric: took 2.773198ms to run NodePressure ...
	I1129 09:11:41.015404  237717 start.go:242] waiting for startup goroutines ...
	I1129 09:11:41.015413  237717 start.go:247] waiting for cluster config update ...
	I1129 09:11:41.015423  237717 start.go:256] writing updated cluster config ...
	I1129 09:11:41.015771  237717 ssh_runner.go:195] Run: rm -f paused
	I1129 09:11:41.020301  237717 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:11:41.021304  237717 kapi.go:59] client config for pause-295501: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22000-5652/.minikube/profiles/pause-295501/client.crt", KeyFile:"/home/jenkins/minikube-integration/22000-5652/.minikube/profiles/pause-295501/client.key", CAFile:"/home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1129 09:11:41.024522  237717 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zwrqh" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:11:41.029342  237717 pod_ready.go:94] pod "coredns-66bc5c9577-zwrqh" is "Ready"
	I1129 09:11:41.029380  237717 pod_ready.go:86] duration metric: took 4.832112ms for pod "coredns-66bc5c9577-zwrqh" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:11:41.031624  237717 pod_ready.go:83] waiting for pod "etcd-pause-295501" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:11:41.036104  237717 pod_ready.go:94] pod "etcd-pause-295501" is "Ready"
	I1129 09:11:41.036132  237717 pod_ready.go:86] duration metric: took 4.479394ms for pod "etcd-pause-295501" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:11:41.038343  237717 pod_ready.go:83] waiting for pod "kube-apiserver-pause-295501" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:11:41.042711  237717 pod_ready.go:94] pod "kube-apiserver-pause-295501" is "Ready"
	I1129 09:11:41.042736  237717 pod_ready.go:86] duration metric: took 4.370525ms for pod "kube-apiserver-pause-295501" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:11:41.044936  237717 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-295501" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:11:41.424440  237717 pod_ready.go:94] pod "kube-controller-manager-pause-295501" is "Ready"
	I1129 09:11:41.424472  237717 pod_ready.go:86] duration metric: took 379.512342ms for pod "kube-controller-manager-pause-295501" in "kube-system" namespace to be "Ready" or be gone ...
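
Before the pod checks above, the runner polls the apiserver's /healthz endpoint until it answers 200 with body "ok" (see 09:11:40.98 in this block); a connection-refused error simply means the apiserver is not up yet. Below is a minimal sketch of that wait; TLS verification is skipped purely for illustration, whereas the real runner authenticates with the profile's client certificates.

	// waitForHealthz is an illustrative sketch: poll https://<node>:8443/healthz
	// until it returns 200 with body "ok", retrying on connection errors.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			// Assumption for the sketch only: skip certificate verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil // apiserver is healthy
				}
			}
			// connection refused or non-200: apiserver not ready, retry
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.85.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
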
	I1129 09:11:37.038118  218317 cri.go:89] found id: ""
	I1129 09:11:37.038146  218317 logs.go:282] 0 containers: []
	W1129 09:11:37.038157  218317 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:11:37.038170  218317 logs.go:123] Gathering logs for kubelet ...
	I1129 09:11:37.038186  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:11:37.132553  218317 logs.go:123] Gathering logs for dmesg ...
	I1129 09:11:37.132587  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:11:37.148650  218317 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:11:37.148677  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:11:37.213617  218317 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:11:37.213641  218317 logs.go:123] Gathering logs for kube-apiserver [ad97145496ded2f71cd6c4470059d5cc8eba68fc6ea8dd528688b83ade8c4db8] ...
	I1129 09:11:37.213657  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ad97145496ded2f71cd6c4470059d5cc8eba68fc6ea8dd528688b83ade8c4db8"
	I1129 09:11:37.252650  218317 logs.go:123] Gathering logs for kube-scheduler [d36b98de5d776212c9777ea4840006b8e9bc445d70128594889132af99755d99] ...
	I1129 09:11:37.252682  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d36b98de5d776212c9777ea4840006b8e9bc445d70128594889132af99755d99"
	I1129 09:11:37.310092  218317 logs.go:123] Gathering logs for kube-controller-manager [ead086a0104bf53c59ed5d8e223ccbb7be900745b5ce874ae12a9c40c089d336] ...
	I1129 09:11:37.310139  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ead086a0104bf53c59ed5d8e223ccbb7be900745b5ce874ae12a9c40c089d336"
	I1129 09:11:37.341690  218317 logs.go:123] Gathering logs for CRI-O ...
	I1129 09:11:37.341721  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 09:11:37.407424  218317 logs.go:123] Gathering logs for container status ...
	I1129 09:11:37.407477  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:11:39.943945  218317 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1129 09:11:39.944425  218317 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1129 09:11:39.944472  218317 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:11:39.944523  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:11:39.978366  218317 cri.go:89] found id: "ad97145496ded2f71cd6c4470059d5cc8eba68fc6ea8dd528688b83ade8c4db8"
	I1129 09:11:39.978395  218317 cri.go:89] found id: ""
	I1129 09:11:39.978406  218317 logs.go:282] 1 containers: [ad97145496ded2f71cd6c4470059d5cc8eba68fc6ea8dd528688b83ade8c4db8]
	I1129 09:11:39.978469  218317 ssh_runner.go:195] Run: which crictl
	I1129 09:11:39.982571  218317 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 09:11:39.982646  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:11:40.014565  218317 cri.go:89] found id: ""
	I1129 09:11:40.014595  218317 logs.go:282] 0 containers: []
	W1129 09:11:40.014605  218317 logs.go:284] No container was found matching "etcd"
	I1129 09:11:40.014613  218317 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 09:11:40.014679  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:11:40.045211  218317 cri.go:89] found id: ""
	I1129 09:11:40.045241  218317 logs.go:282] 0 containers: []
	W1129 09:11:40.045253  218317 logs.go:284] No container was found matching "coredns"
	I1129 09:11:40.045261  218317 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:11:40.045325  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:11:40.077112  218317 cri.go:89] found id: "d36b98de5d776212c9777ea4840006b8e9bc445d70128594889132af99755d99"
	I1129 09:11:40.077140  218317 cri.go:89] found id: ""
	I1129 09:11:40.077151  218317 logs.go:282] 1 containers: [d36b98de5d776212c9777ea4840006b8e9bc445d70128594889132af99755d99]
	I1129 09:11:40.077216  218317 ssh_runner.go:195] Run: which crictl
	I1129 09:11:40.081894  218317 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:11:40.081969  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:11:40.113487  218317 cri.go:89] found id: ""
	I1129 09:11:40.113511  218317 logs.go:282] 0 containers: []
	W1129 09:11:40.113521  218317 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:11:40.113529  218317 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:11:40.113588  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:11:40.143488  218317 cri.go:89] found id: "ead086a0104bf53c59ed5d8e223ccbb7be900745b5ce874ae12a9c40c089d336"
	I1129 09:11:40.143513  218317 cri.go:89] found id: ""
	I1129 09:11:40.143522  218317 logs.go:282] 1 containers: [ead086a0104bf53c59ed5d8e223ccbb7be900745b5ce874ae12a9c40c089d336]
	I1129 09:11:40.143573  218317 ssh_runner.go:195] Run: which crictl
	I1129 09:11:40.148170  218317 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 09:11:40.148249  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:11:40.179901  218317 cri.go:89] found id: ""
	I1129 09:11:40.179928  218317 logs.go:282] 0 containers: []
	W1129 09:11:40.179938  218317 logs.go:284] No container was found matching "kindnet"
	I1129 09:11:40.179946  218317 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:11:40.180004  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:11:40.210247  218317 cri.go:89] found id: ""
	I1129 09:11:40.210270  218317 logs.go:282] 0 containers: []
	W1129 09:11:40.210277  218317 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:11:40.210286  218317 logs.go:123] Gathering logs for dmesg ...
	I1129 09:11:40.210298  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:11:40.226470  218317 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:11:40.226505  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:11:40.292217  218317 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:11:40.292243  218317 logs.go:123] Gathering logs for kube-apiserver [ad97145496ded2f71cd6c4470059d5cc8eba68fc6ea8dd528688b83ade8c4db8] ...
	I1129 09:11:40.292258  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ad97145496ded2f71cd6c4470059d5cc8eba68fc6ea8dd528688b83ade8c4db8"
	I1129 09:11:40.332431  218317 logs.go:123] Gathering logs for kube-scheduler [d36b98de5d776212c9777ea4840006b8e9bc445d70128594889132af99755d99] ...
	I1129 09:11:40.332464  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d36b98de5d776212c9777ea4840006b8e9bc445d70128594889132af99755d99"
	I1129 09:11:40.400115  218317 logs.go:123] Gathering logs for kube-controller-manager [ead086a0104bf53c59ed5d8e223ccbb7be900745b5ce874ae12a9c40c089d336] ...
	I1129 09:11:40.400153  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ead086a0104bf53c59ed5d8e223ccbb7be900745b5ce874ae12a9c40c089d336"
	I1129 09:11:40.434697  218317 logs.go:123] Gathering logs for CRI-O ...
	I1129 09:11:40.434724  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 09:11:40.502709  218317 logs.go:123] Gathering logs for container status ...
	I1129 09:11:40.502750  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:11:40.544897  218317 logs.go:123] Gathering logs for kubelet ...
	I1129 09:11:40.544929  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:11:41.624451  237717 pod_ready.go:83] waiting for pod "kube-proxy-f4kr8" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:11:42.024439  237717 pod_ready.go:94] pod "kube-proxy-f4kr8" is "Ready"
	I1129 09:11:42.024466  237717 pod_ready.go:86] duration metric: took 399.988573ms for pod "kube-proxy-f4kr8" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:11:42.224669  237717 pod_ready.go:83] waiting for pod "kube-scheduler-pause-295501" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:11:42.625036  237717 pod_ready.go:94] pod "kube-scheduler-pause-295501" is "Ready"
	I1129 09:11:42.625065  237717 pod_ready.go:86] duration metric: took 400.370045ms for pod "kube-scheduler-pause-295501" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:11:42.625078  237717 pod_ready.go:40] duration metric: took 1.604733434s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:11:42.670228  237717 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1129 09:11:42.671921  237717 out.go:179] * Done! kubectl is now configured to use "pause-295501" cluster and "default" namespace by default
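
The "extra waiting" pass that finishes above lists kube-system pods by each control-plane label selector and checks their Ready condition. The following is a hypothetical client-go sketch of that check, not minikube's actual pod_ready.go; it requires the k8s.io/client-go module, and the kubeconfig path and helper names are assumptions for illustration.

	// Sketch: for each control-plane label selector, list matching kube-system
	// pods and report whether their PodReady condition is True.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumed kubeconfig location; minikube uses the profile's kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		selectors := []string{
			"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
			"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
		}
		for _, sel := range selectors {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
			if err != nil {
				panic(err)
			}
			for i := range pods.Items {
				fmt.Printf("%s ready=%v\n", pods.Items[i].Name, podReady(&pods.Items[i]))
			}
		}
	}
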
	I1129 09:11:39.455759  214471 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1129 09:11:39.456221  214471 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1129 09:11:39.456269  214471 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:11:39.456319  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:11:39.493639  214471 cri.go:89] found id: "1dff8abbda684fe8a803a0d7feee0d46229689119dc2199e9c5b4d859caf176d"
	I1129 09:11:39.493666  214471 cri.go:89] found id: ""
	I1129 09:11:39.493677  214471 logs.go:282] 1 containers: [1dff8abbda684fe8a803a0d7feee0d46229689119dc2199e9c5b4d859caf176d]
	I1129 09:11:39.493742  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:39.497836  214471 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 09:11:39.497921  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:11:39.535560  214471 cri.go:89] found id: "6cdd44d8db846ffb376bf831123c4ce432f87f0616753fa1802db2474c6da5e0"
	I1129 09:11:39.535582  214471 cri.go:89] found id: ""
	I1129 09:11:39.535603  214471 logs.go:282] 1 containers: [6cdd44d8db846ffb376bf831123c4ce432f87f0616753fa1802db2474c6da5e0]
	I1129 09:11:39.535662  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:39.539813  214471 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 09:11:39.539911  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:11:39.580314  214471 cri.go:89] found id: ""
	I1129 09:11:39.580337  214471 logs.go:282] 0 containers: []
	W1129 09:11:39.580355  214471 logs.go:284] No container was found matching "coredns"
	I1129 09:11:39.580360  214471 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:11:39.580480  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:11:39.620199  214471 cri.go:89] found id: "fcff0da386e90a7e2da6915329552d3b7f08edc5e745bc390cd94526235e34ad"
	I1129 09:11:39.620223  214471 cri.go:89] found id: "fc7be3672b1d7cde9d3cd515c0ebafb17a2ff060a0fd47dbbb990b57cde6c4a7"
	I1129 09:11:39.620230  214471 cri.go:89] found id: ""
	I1129 09:11:39.620240  214471 logs.go:282] 2 containers: [fcff0da386e90a7e2da6915329552d3b7f08edc5e745bc390cd94526235e34ad fc7be3672b1d7cde9d3cd515c0ebafb17a2ff060a0fd47dbbb990b57cde6c4a7]
	I1129 09:11:39.620292  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:39.624913  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:39.628828  214471 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:11:39.628912  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:11:39.666984  214471 cri.go:89] found id: "31d6025ff82b3e085a9904c2d696e3cd87b765553852bee59bf02c5ca2982bb3"
	I1129 09:11:39.667009  214471 cri.go:89] found id: "9bc5d76fdb0a28cd329b1051fec59088325c606b89a8b925d3ab5b1519649cc3"
	I1129 09:11:39.667015  214471 cri.go:89] found id: ""
	I1129 09:11:39.667028  214471 logs.go:282] 2 containers: [31d6025ff82b3e085a9904c2d696e3cd87b765553852bee59bf02c5ca2982bb3 9bc5d76fdb0a28cd329b1051fec59088325c606b89a8b925d3ab5b1519649cc3]
	I1129 09:11:39.667087  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:39.671862  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:39.676989  214471 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:11:39.677069  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:11:39.717467  214471 cri.go:89] found id: "33f91ffe40a0d328d35ddf61661372bb263a7bc34d24185bd700d35146f1a322"
	I1129 09:11:39.717486  214471 cri.go:89] found id: ""
	I1129 09:11:39.717499  214471 logs.go:282] 1 containers: [33f91ffe40a0d328d35ddf61661372bb263a7bc34d24185bd700d35146f1a322]
	I1129 09:11:39.717549  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:39.721428  214471 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 09:11:39.721505  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:11:39.762828  214471 cri.go:89] found id: "fdaf7e1d1f92f5357e2a36a658d6c56cb5e970830f15eb88888d800f36574e19"
	I1129 09:11:39.762873  214471 cri.go:89] found id: "ec138943901d9c0c58c204789d15a0d1d6edad3161a00469e53b5fd0908fe194"
	I1129 09:11:39.762879  214471 cri.go:89] found id: ""
	I1129 09:11:39.762889  214471 logs.go:282] 2 containers: [fdaf7e1d1f92f5357e2a36a658d6c56cb5e970830f15eb88888d800f36574e19 ec138943901d9c0c58c204789d15a0d1d6edad3161a00469e53b5fd0908fe194]
	I1129 09:11:39.762951  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:39.767631  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:39.771612  214471 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:11:39.771670  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:11:39.811684  214471 cri.go:89] found id: "d29d7545c01a5541cd53753145eb6b8f887d1ca4f6e2c856d793a2af6f0f120b"
	I1129 09:11:39.811711  214471 cri.go:89] found id: ""
	I1129 09:11:39.811721  214471 logs.go:282] 1 containers: [d29d7545c01a5541cd53753145eb6b8f887d1ca4f6e2c856d793a2af6f0f120b]
	I1129 09:11:39.811783  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:39.815823  214471 logs.go:123] Gathering logs for container status ...
	I1129 09:11:39.815865  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:11:39.857558  214471 logs.go:123] Gathering logs for dmesg ...
	I1129 09:11:39.857588  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:11:39.882667  214471 logs.go:123] Gathering logs for kube-apiserver [1dff8abbda684fe8a803a0d7feee0d46229689119dc2199e9c5b4d859caf176d] ...
	I1129 09:11:39.882716  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1dff8abbda684fe8a803a0d7feee0d46229689119dc2199e9c5b4d859caf176d"
	I1129 09:11:39.923530  214471 logs.go:123] Gathering logs for kube-proxy [9bc5d76fdb0a28cd329b1051fec59088325c606b89a8b925d3ab5b1519649cc3] ...
	I1129 09:11:39.923567  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bc5d76fdb0a28cd329b1051fec59088325c606b89a8b925d3ab5b1519649cc3"
	I1129 09:11:39.964583  214471 logs.go:123] Gathering logs for kindnet [fdaf7e1d1f92f5357e2a36a658d6c56cb5e970830f15eb88888d800f36574e19] ...
	I1129 09:11:39.964621  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fdaf7e1d1f92f5357e2a36a658d6c56cb5e970830f15eb88888d800f36574e19"
	I1129 09:11:40.020586  214471 logs.go:123] Gathering logs for storage-provisioner [d29d7545c01a5541cd53753145eb6b8f887d1ca4f6e2c856d793a2af6f0f120b] ...
	I1129 09:11:40.020623  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d29d7545c01a5541cd53753145eb6b8f887d1ca4f6e2c856d793a2af6f0f120b"
	I1129 09:11:40.064811  214471 logs.go:123] Gathering logs for kubelet ...
	I1129 09:11:40.064859  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:11:40.167022  214471 logs.go:123] Gathering logs for kube-scheduler [fc7be3672b1d7cde9d3cd515c0ebafb17a2ff060a0fd47dbbb990b57cde6c4a7] ...
	I1129 09:11:40.167058  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc7be3672b1d7cde9d3cd515c0ebafb17a2ff060a0fd47dbbb990b57cde6c4a7"
	I1129 09:11:40.221232  214471 logs.go:123] Gathering logs for kube-proxy [31d6025ff82b3e085a9904c2d696e3cd87b765553852bee59bf02c5ca2982bb3] ...
	I1129 09:11:40.221274  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31d6025ff82b3e085a9904c2d696e3cd87b765553852bee59bf02c5ca2982bb3"
	I1129 09:11:40.271434  214471 logs.go:123] Gathering logs for CRI-O ...
	I1129 09:11:40.271464  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 09:11:40.350824  214471 logs.go:123] Gathering logs for kube-scheduler [fcff0da386e90a7e2da6915329552d3b7f08edc5e745bc390cd94526235e34ad] ...
	I1129 09:11:40.350875  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fcff0da386e90a7e2da6915329552d3b7f08edc5e745bc390cd94526235e34ad"
	I1129 09:11:40.438732  214471 logs.go:123] Gathering logs for kube-controller-manager [33f91ffe40a0d328d35ddf61661372bb263a7bc34d24185bd700d35146f1a322] ...
	I1129 09:11:40.438765  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f91ffe40a0d328d35ddf61661372bb263a7bc34d24185bd700d35146f1a322"
	I1129 09:11:40.486213  214471 logs.go:123] Gathering logs for kindnet [ec138943901d9c0c58c204789d15a0d1d6edad3161a00469e53b5fd0908fe194] ...
	I1129 09:11:40.486244  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec138943901d9c0c58c204789d15a0d1d6edad3161a00469e53b5fd0908fe194"
	I1129 09:11:40.532247  214471 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:11:40.532274  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:11:40.608168  214471 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:11:40.608193  214471 logs.go:123] Gathering logs for etcd [6cdd44d8db846ffb376bf831123c4ce432f87f0616753fa1802db2474c6da5e0] ...
	I1129 09:11:40.608212  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6cdd44d8db846ffb376bf831123c4ce432f87f0616753fa1802db2474c6da5e0"
	I1129 09:11:43.168886  214471 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1129 09:11:43.169492  214471 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1129 09:11:43.169561  214471 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:11:43.169627  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:11:43.213215  214471 cri.go:89] found id: "1dff8abbda684fe8a803a0d7feee0d46229689119dc2199e9c5b4d859caf176d"
	I1129 09:11:43.213242  214471 cri.go:89] found id: ""
	I1129 09:11:43.213252  214471 logs.go:282] 1 containers: [1dff8abbda684fe8a803a0d7feee0d46229689119dc2199e9c5b4d859caf176d]
	I1129 09:11:43.213316  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:43.218373  214471 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 09:11:43.218457  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:11:43.257223  214471 cri.go:89] found id: "6cdd44d8db846ffb376bf831123c4ce432f87f0616753fa1802db2474c6da5e0"
	I1129 09:11:43.257245  214471 cri.go:89] found id: ""
	I1129 09:11:43.257253  214471 logs.go:282] 1 containers: [6cdd44d8db846ffb376bf831123c4ce432f87f0616753fa1802db2474c6da5e0]
	I1129 09:11:43.257308  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:43.261910  214471 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 09:11:43.261990  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:11:43.302140  214471 cri.go:89] found id: ""
	I1129 09:11:43.302172  214471 logs.go:282] 0 containers: []
	W1129 09:11:43.302194  214471 logs.go:284] No container was found matching "coredns"
	I1129 09:11:43.302203  214471 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:11:43.302267  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:11:43.339478  214471 cri.go:89] found id: "fcff0da386e90a7e2da6915329552d3b7f08edc5e745bc390cd94526235e34ad"
	I1129 09:11:43.339497  214471 cri.go:89] found id: "fc7be3672b1d7cde9d3cd515c0ebafb17a2ff060a0fd47dbbb990b57cde6c4a7"
	I1129 09:11:43.339501  214471 cri.go:89] found id: ""
	I1129 09:11:43.339508  214471 logs.go:282] 2 containers: [fcff0da386e90a7e2da6915329552d3b7f08edc5e745bc390cd94526235e34ad fc7be3672b1d7cde9d3cd515c0ebafb17a2ff060a0fd47dbbb990b57cde6c4a7]
	I1129 09:11:43.339549  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:43.343868  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:43.347769  214471 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:11:43.347851  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:11:43.385385  214471 cri.go:89] found id: "31d6025ff82b3e085a9904c2d696e3cd87b765553852bee59bf02c5ca2982bb3"
	I1129 09:11:43.385412  214471 cri.go:89] found id: "9bc5d76fdb0a28cd329b1051fec59088325c606b89a8b925d3ab5b1519649cc3"
	I1129 09:11:43.385418  214471 cri.go:89] found id: ""
	I1129 09:11:43.385428  214471 logs.go:282] 2 containers: [31d6025ff82b3e085a9904c2d696e3cd87b765553852bee59bf02c5ca2982bb3 9bc5d76fdb0a28cd329b1051fec59088325c606b89a8b925d3ab5b1519649cc3]
	I1129 09:11:43.385480  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:43.389543  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:43.394281  214471 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:11:43.394352  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:11:43.432680  214471 cri.go:89] found id: "33f91ffe40a0d328d35ddf61661372bb263a7bc34d24185bd700d35146f1a322"
	I1129 09:11:43.432707  214471 cri.go:89] found id: ""
	I1129 09:11:43.432717  214471 logs.go:282] 1 containers: [33f91ffe40a0d328d35ddf61661372bb263a7bc34d24185bd700d35146f1a322]
	I1129 09:11:43.432778  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:43.437328  214471 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 09:11:43.437406  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:11:43.477920  214471 cri.go:89] found id: "fdaf7e1d1f92f5357e2a36a658d6c56cb5e970830f15eb88888d800f36574e19"
	I1129 09:11:43.477949  214471 cri.go:89] found id: "ec138943901d9c0c58c204789d15a0d1d6edad3161a00469e53b5fd0908fe194"
	I1129 09:11:43.477955  214471 cri.go:89] found id: ""
	I1129 09:11:43.477964  214471 logs.go:282] 2 containers: [fdaf7e1d1f92f5357e2a36a658d6c56cb5e970830f15eb88888d800f36574e19 ec138943901d9c0c58c204789d15a0d1d6edad3161a00469e53b5fd0908fe194]
	I1129 09:11:43.478019  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:43.482024  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:43.486067  214471 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:11:43.486140  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:11:43.534548  214471 cri.go:89] found id: "d29d7545c01a5541cd53753145eb6b8f887d1ca4f6e2c856d793a2af6f0f120b"
	I1129 09:11:43.534576  214471 cri.go:89] found id: ""
	I1129 09:11:43.534587  214471 logs.go:282] 1 containers: [d29d7545c01a5541cd53753145eb6b8f887d1ca4f6e2c856d793a2af6f0f120b]
	I1129 09:11:43.534644  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:43.539198  214471 logs.go:123] Gathering logs for storage-provisioner [d29d7545c01a5541cd53753145eb6b8f887d1ca4f6e2c856d793a2af6f0f120b] ...
	I1129 09:11:43.539223  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d29d7545c01a5541cd53753145eb6b8f887d1ca4f6e2c856d793a2af6f0f120b"
	I1129 09:11:43.578682  214471 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:11:43.578711  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:11:43.654353  214471 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:11:43.654388  214471 logs.go:123] Gathering logs for kindnet [ec138943901d9c0c58c204789d15a0d1d6edad3161a00469e53b5fd0908fe194] ...
	I1129 09:11:43.654403  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec138943901d9c0c58c204789d15a0d1d6edad3161a00469e53b5fd0908fe194"
	I1129 09:11:43.697226  214471 logs.go:123] Gathering logs for kubelet ...
	I1129 09:11:43.697315  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:11:43.807952  214471 logs.go:123] Gathering logs for dmesg ...
	I1129 09:11:43.808004  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	
	
	==> CRI-O <==
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.414916307Z" level=info msg="RDT not available in the host system"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.414930456Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.415716309Z" level=info msg="Conmon does support the --sync option"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.415734446Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.415753406Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.416460046Z" level=info msg="Conmon does support the --sync option"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.416476237Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.420299293Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.420329387Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.420926775Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.421386521Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.421446577Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.513021051Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-zwrqh Namespace:kube-system ID:dbb953d5d78dfcbb4e8f84b62bc77e052694698c676219eb37c5d3fc5ffb01ab UID:22030a4d-3d80-4ff5-b4ff-245caa1db156 NetNS:/var/run/netns/80db6f7a-f620-4e62-8ad2-687a6af1ae53 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005aa370}] Aliases:map[]}"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.513211767Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-zwrqh for CNI network kindnet (type=ptp)"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.513659283Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.513685017Z" level=info msg="Starting seccomp notifier watcher"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.513743154Z" level=info msg="Create NRI interface"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.51387465Z" level=info msg="built-in NRI default validator is disabled"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.513891852Z" level=info msg="runtime interface created"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.513902817Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.513908231Z" level=info msg="runtime interface starting up..."
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.513912971Z" level=info msg="starting plugins..."
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.513924557Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.514297807Z" level=info msg="No systemd watchdog enabled"
	Nov 29 09:11:39 pause-295501 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	a8bbbdfd51dab       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   11 seconds ago      Running             coredns                   0                   dbb953d5d78df       coredns-66bc5c9577-zwrqh               kube-system
	f78ddfecb898a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   23 seconds ago      Running             kindnet-cni               0                   3dce700df6f43       kindnet-st2fs                          kube-system
	092bad9397a64       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   23 seconds ago      Running             kube-proxy                0                   306f18b9bacfd       kube-proxy-f4kr8                       kube-system
	a2578393c04ae       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   33 seconds ago      Running             kube-scheduler            0                   ae847dd9e384d       kube-scheduler-pause-295501            kube-system
	a1cfbd7b390ca       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   33 seconds ago      Running             kube-apiserver            0                   985a97e3945e5       kube-apiserver-pause-295501            kube-system
	45e32d34386d7       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   33 seconds ago      Running             etcd                      0                   a3aaf265697d7       etcd-pause-295501                      kube-system
	249d465154ffd       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   33 seconds ago      Running             kube-controller-manager   0                   9efa9ed9d0067       kube-controller-manager-pause-295501   kube-system
	
	
	==> coredns [a8bbbdfd51dab82a8d563b57b00d7ca2194f16c69e6c25030b49a452fa24c721] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39627 - 35765 "HINFO IN 3165023682602823029.1093676017627200724. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.042657653s
	
	
	==> describe nodes <==
	Name:               pause-295501
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-295501
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=pause-295501
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T09_11_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 09:11:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-295501
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 09:11:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 09:11:33 +0000   Sat, 29 Nov 2025 09:11:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 09:11:33 +0000   Sat, 29 Nov 2025 09:11:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 09:11:33 +0000   Sat, 29 Nov 2025 09:11:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 09:11:33 +0000   Sat, 29 Nov 2025 09:11:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-295501
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                1fe31677-beb1-4298-8b5b-7e258707552a
	  Boot ID:                    5b66e152-4b71-4129-a911-cefbf4861c86
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-zwrqh                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-pause-295501                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-st2fs                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-pause-295501             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-pause-295501    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-f4kr8                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-pause-295501             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 29s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s   kubelet          Node pause-295501 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s   kubelet          Node pause-295501 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s   kubelet          Node pause-295501 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24s   node-controller  Node pause-295501 event: Registered Node pause-295501 in Controller
	  Normal  NodeReady                12s   kubelet          Node pause-295501 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.088968] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025527] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.969002] kauditd_printk_skb: 47 callbacks suppressed
	[Nov29 08:30] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	[  +1.030577] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	[  +1.023853] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	[  +1.023871] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	[  +1.023925] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	[  +2.047756] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	[  +4.031543] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	[Nov29 08:31] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	[ +16.382281] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	[ +32.252561] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	
	
	==> etcd [45e32d34386d74e83368055caa5eb9f063ed6013f8c4cd7c8a1fbf290b1d66ef] <==
	{"level":"warn","ts":"2025-11-29T09:11:13.460918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.470871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.478738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.485939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.493236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.500679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.508773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.515196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.522906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.533059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.540479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.547470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.554453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.561695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.571058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.579927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.587019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.593776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.600732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.608537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.614773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.640038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.646894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.653594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.707570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49688","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:11:45 up 54 min,  0 user,  load average: 1.54, 2.15, 1.51
	Linux pause-295501 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f78ddfecb898a10860947cd0138c0c91e432eb8133b9c1199e2378046faeefb6] <==
	I1129 09:11:22.862107       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 09:11:22.862431       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1129 09:11:22.862607       1 main.go:148] setting mtu 1500 for CNI 
	I1129 09:11:22.862631       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 09:11:22.862657       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T09:11:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 09:11:23.062420       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 09:11:23.094780       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 09:11:23.094867       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 09:11:23.095052       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1129 09:11:23.395030       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 09:11:23.395071       1 metrics.go:72] Registering metrics
	I1129 09:11:23.395194       1 controller.go:711] "Syncing nftables rules"
	I1129 09:11:33.062980       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 09:11:33.063045       1 main.go:301] handling current node
	I1129 09:11:43.067453       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 09:11:43.067503       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a1cfbd7b390ca764419e13459208f92ccab83b9f560dab98155c947b575c9eb7] <==
	I1129 09:11:14.224778       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1129 09:11:14.224816       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1129 09:11:14.228615       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1129 09:11:14.228640       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:11:14.232248       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:11:14.232500       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1129 09:11:14.236010       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 09:11:14.237701       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1129 09:11:15.099415       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1129 09:11:15.103132       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1129 09:11:15.103161       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 09:11:15.585061       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 09:11:15.625995       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 09:11:15.701551       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1129 09:11:15.707877       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1129 09:11:15.708946       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 09:11:15.713320       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 09:11:16.110453       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 09:11:16.733752       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 09:11:16.743886       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1129 09:11:16.752898       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1129 09:11:21.766268       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:11:21.770637       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:11:22.062269       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 09:11:22.213294       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [249d465154ffd1c8223cffce99d25f392334b77035bb4fd7a68ea732b0d1ffaa] <==
	I1129 09:11:21.102198       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1129 09:11:21.102217       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1129 09:11:21.105710       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1129 09:11:21.108124       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:11:21.108150       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1129 09:11:21.108161       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1129 09:11:21.108662       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1129 09:11:21.109718       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1129 09:11:21.109746       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1129 09:11:21.109770       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1129 09:11:21.109881       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1129 09:11:21.109985       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1129 09:11:21.110006       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1129 09:11:21.110047       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1129 09:11:21.110073       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1129 09:11:21.110084       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1129 09:11:21.110589       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1129 09:11:21.110640       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1129 09:11:21.110929       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1129 09:11:21.112527       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1129 09:11:21.113362       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1129 09:11:21.118965       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:11:21.132197       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:11:21.143557       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1129 09:11:36.062348       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [092bad9397a64f5518143503ccfeb6661abcd1fb66cf16f31703c648078497fe] <==
	I1129 09:11:22.642815       1 server_linux.go:53] "Using iptables proxy"
	I1129 09:11:22.698111       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 09:11:22.798759       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 09:11:22.798803       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1129 09:11:22.798980       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 09:11:22.818279       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 09:11:22.818351       1 server_linux.go:132] "Using iptables Proxier"
	I1129 09:11:22.824075       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 09:11:22.824489       1 server.go:527] "Version info" version="v1.34.1"
	I1129 09:11:22.824516       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:11:22.826980       1 config.go:106] "Starting endpoint slice config controller"
	I1129 09:11:22.827063       1 config.go:200] "Starting service config controller"
	I1129 09:11:22.827087       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 09:11:22.827073       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 09:11:22.827166       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 09:11:22.827187       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 09:11:22.827261       1 config.go:309] "Starting node config controller"
	I1129 09:11:22.827296       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 09:11:22.827303       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 09:11:22.927684       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1129 09:11:22.927728       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 09:11:22.927750       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [a2578393c04aeba954d20c8ac71275220f67953d2fd43b6e378182a2c47660e2] <==
	I1129 09:11:14.767752       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:11:14.769540       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:11:14.769584       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:11:14.769804       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1129 09:11:14.769869       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1129 09:11:14.772677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1129 09:11:14.772776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1129 09:11:14.773164       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1129 09:11:14.773191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 09:11:14.773225       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 09:11:14.773286       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 09:11:14.773390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1129 09:11:14.773393       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1129 09:11:14.773817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1129 09:11:14.774242       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1129 09:11:14.774289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1129 09:11:14.774390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1129 09:11:14.774437       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1129 09:11:14.774561       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1129 09:11:14.774715       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1129 09:11:14.774975       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1129 09:11:14.774974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1129 09:11:14.775086       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 09:11:14.775194       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1129 09:11:16.070110       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 09:11:17 pause-295501 kubelet[1297]: E1129 09:11:17.625056    1297 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-295501\" already exists" pod="kube-system/kube-controller-manager-pause-295501"
	Nov 29 09:11:17 pause-295501 kubelet[1297]: I1129 09:11:17.637962    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-295501" podStartSLOduration=1.6379409329999999 podStartE2EDuration="1.637940933s" podCreationTimestamp="2025-11-29 09:11:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:11:17.636516565 +0000 UTC m=+1.133499447" watchObservedRunningTime="2025-11-29 09:11:17.637940933 +0000 UTC m=+1.134923811"
	Nov 29 09:11:17 pause-295501 kubelet[1297]: I1129 09:11:17.661325    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-295501" podStartSLOduration=1.661301759 podStartE2EDuration="1.661301759s" podCreationTimestamp="2025-11-29 09:11:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:11:17.648189449 +0000 UTC m=+1.145172332" watchObservedRunningTime="2025-11-29 09:11:17.661301759 +0000 UTC m=+1.158284642"
	Nov 29 09:11:17 pause-295501 kubelet[1297]: I1129 09:11:17.671522    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-295501" podStartSLOduration=1.6715015050000002 podStartE2EDuration="1.671501505s" podCreationTimestamp="2025-11-29 09:11:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:11:17.661299922 +0000 UTC m=+1.158282805" watchObservedRunningTime="2025-11-29 09:11:17.671501505 +0000 UTC m=+1.168484389"
	Nov 29 09:11:17 pause-295501 kubelet[1297]: I1129 09:11:17.682126    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-295501" podStartSLOduration=1.682101937 podStartE2EDuration="1.682101937s" podCreationTimestamp="2025-11-29 09:11:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:11:17.671666677 +0000 UTC m=+1.168649559" watchObservedRunningTime="2025-11-29 09:11:17.682101937 +0000 UTC m=+1.179084814"
	Nov 29 09:11:21 pause-295501 kubelet[1297]: I1129 09:11:21.125441    1297 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 29 09:11:21 pause-295501 kubelet[1297]: I1129 09:11:21.126253    1297 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 29 09:11:22 pause-295501 kubelet[1297]: I1129 09:11:22.322684    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a6934cc7-9fdf-4551-bfd6-b001f95eb4f2-cni-cfg\") pod \"kindnet-st2fs\" (UID: \"a6934cc7-9fdf-4551-bfd6-b001f95eb4f2\") " pod="kube-system/kindnet-st2fs"
	Nov 29 09:11:22 pause-295501 kubelet[1297]: I1129 09:11:22.322766    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n747c\" (UniqueName: \"kubernetes.io/projected/049b4663-ac1a-4dfc-9ab7-7060baa838e6-kube-api-access-n747c\") pod \"kube-proxy-f4kr8\" (UID: \"049b4663-ac1a-4dfc-9ab7-7060baa838e6\") " pod="kube-system/kube-proxy-f4kr8"
	Nov 29 09:11:22 pause-295501 kubelet[1297]: I1129 09:11:22.322793    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6934cc7-9fdf-4551-bfd6-b001f95eb4f2-lib-modules\") pod \"kindnet-st2fs\" (UID: \"a6934cc7-9fdf-4551-bfd6-b001f95eb4f2\") " pod="kube-system/kindnet-st2fs"
	Nov 29 09:11:22 pause-295501 kubelet[1297]: I1129 09:11:22.322820    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z49g8\" (UniqueName: \"kubernetes.io/projected/a6934cc7-9fdf-4551-bfd6-b001f95eb4f2-kube-api-access-z49g8\") pod \"kindnet-st2fs\" (UID: \"a6934cc7-9fdf-4551-bfd6-b001f95eb4f2\") " pod="kube-system/kindnet-st2fs"
	Nov 29 09:11:22 pause-295501 kubelet[1297]: I1129 09:11:22.322874    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/049b4663-ac1a-4dfc-9ab7-7060baa838e6-kube-proxy\") pod \"kube-proxy-f4kr8\" (UID: \"049b4663-ac1a-4dfc-9ab7-7060baa838e6\") " pod="kube-system/kube-proxy-f4kr8"
	Nov 29 09:11:22 pause-295501 kubelet[1297]: I1129 09:11:22.322898    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/049b4663-ac1a-4dfc-9ab7-7060baa838e6-xtables-lock\") pod \"kube-proxy-f4kr8\" (UID: \"049b4663-ac1a-4dfc-9ab7-7060baa838e6\") " pod="kube-system/kube-proxy-f4kr8"
	Nov 29 09:11:22 pause-295501 kubelet[1297]: I1129 09:11:22.322969    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/049b4663-ac1a-4dfc-9ab7-7060baa838e6-lib-modules\") pod \"kube-proxy-f4kr8\" (UID: \"049b4663-ac1a-4dfc-9ab7-7060baa838e6\") " pod="kube-system/kube-proxy-f4kr8"
	Nov 29 09:11:22 pause-295501 kubelet[1297]: I1129 09:11:22.323013    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6934cc7-9fdf-4551-bfd6-b001f95eb4f2-xtables-lock\") pod \"kindnet-st2fs\" (UID: \"a6934cc7-9fdf-4551-bfd6-b001f95eb4f2\") " pod="kube-system/kindnet-st2fs"
	Nov 29 09:11:22 pause-295501 kubelet[1297]: I1129 09:11:22.641147    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f4kr8" podStartSLOduration=0.641125525 podStartE2EDuration="641.125525ms" podCreationTimestamp="2025-11-29 09:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:11:22.640554293 +0000 UTC m=+6.137537177" watchObservedRunningTime="2025-11-29 09:11:22.641125525 +0000 UTC m=+6.138108409"
	Nov 29 09:11:22 pause-295501 kubelet[1297]: I1129 09:11:22.653296    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-st2fs" podStartSLOduration=0.653274475 podStartE2EDuration="653.274475ms" podCreationTimestamp="2025-11-29 09:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:11:22.652816621 +0000 UTC m=+6.149799506" watchObservedRunningTime="2025-11-29 09:11:22.653274475 +0000 UTC m=+6.150257358"
	Nov 29 09:11:33 pause-295501 kubelet[1297]: I1129 09:11:33.488492    1297 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 29 09:11:33 pause-295501 kubelet[1297]: I1129 09:11:33.607819    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22030a4d-3d80-4ff5-b4ff-245caa1db156-config-volume\") pod \"coredns-66bc5c9577-zwrqh\" (UID: \"22030a4d-3d80-4ff5-b4ff-245caa1db156\") " pod="kube-system/coredns-66bc5c9577-zwrqh"
	Nov 29 09:11:33 pause-295501 kubelet[1297]: I1129 09:11:33.607908    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqzbb\" (UniqueName: \"kubernetes.io/projected/22030a4d-3d80-4ff5-b4ff-245caa1db156-kube-api-access-mqzbb\") pod \"coredns-66bc5c9577-zwrqh\" (UID: \"22030a4d-3d80-4ff5-b4ff-245caa1db156\") " pod="kube-system/coredns-66bc5c9577-zwrqh"
	Nov 29 09:11:34 pause-295501 kubelet[1297]: I1129 09:11:34.670210    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zwrqh" podStartSLOduration=12.670178731 podStartE2EDuration="12.670178731s" podCreationTimestamp="2025-11-29 09:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:11:34.669909943 +0000 UTC m=+18.166892846" watchObservedRunningTime="2025-11-29 09:11:34.670178731 +0000 UTC m=+18.167161610"
	Nov 29 09:11:43 pause-295501 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 29 09:11:43 pause-295501 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 29 09:11:43 pause-295501 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 29 09:11:43 pause-295501 systemd[1]: kubelet.service: Consumed 1.205s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-295501 -n pause-295501
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-295501 -n pause-295501: exit status 2 (343.559978ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
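
The non-zero exit above is tolerated by the harness ("may be ok") because minikube encodes component state in the status command's exit code while still printing the requested field ("Running"). A minimal sketch of the same tolerate-and-log pattern, assuming the binary path and profile name taken from this run:

	// status_check.go: a sketch of tolerating minikube's non-zero status
	// exit codes, mirroring the "may be ok" handling above. The binary
	// path and profile name are assumptions taken from this report.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.APIServer}}", "-p", "pause-295501", "-n", "pause-295501")
		out, err := cmd.Output() // stdout still carries the field on a non-zero exit
		if ee, ok := err.(*exec.ExitError); ok {
			fmt.Printf("status error: exit status %d (may be ok)\n", ee.ExitCode())
		} else if err != nil {
			fmt.Println("failed to run status:", err)
			return
		}
		fmt.Printf("APIServer: %s", out)
	}
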
helpers_test.go:269: (dbg) Run:  kubectl --context pause-295501 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
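
The field-selector probe above lists every pod whose phase is not Running, so empty output is the healthy case. The same check via client-go, as a minimal sketch; the context name is taken from this run and the rest is an assumption:

	// nonrunning_pods.go: a client-go sketch of the harness's
	// --field-selector=status.phase!=Running probe shown above.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
			clientcmd.NewDefaultClientConfigLoadingRules(),
			&clientcmd.ConfigOverrides{CurrentContext: "pause-295501"},
		).ClientConfig()
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("").List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items { // nothing printed means all pods are Running
			fmt.Println(p.Namespace, p.Name, p.Status.Phase)
		}
	}
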
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-295501
helpers_test.go:243: (dbg) docker inspect pause-295501:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "968421c7df9896ef130cafb07722b48c470caea9e9ac404061d61d0ab4c7c407",
	        "Created": "2025-11-29T09:11:00.300950233Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 231486,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T09:11:00.343851267Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/968421c7df9896ef130cafb07722b48c470caea9e9ac404061d61d0ab4c7c407/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/968421c7df9896ef130cafb07722b48c470caea9e9ac404061d61d0ab4c7c407/hostname",
	        "HostsPath": "/var/lib/docker/containers/968421c7df9896ef130cafb07722b48c470caea9e9ac404061d61d0ab4c7c407/hosts",
	        "LogPath": "/var/lib/docker/containers/968421c7df9896ef130cafb07722b48c470caea9e9ac404061d61d0ab4c7c407/968421c7df9896ef130cafb07722b48c470caea9e9ac404061d61d0ab4c7c407-json.log",
	        "Name": "/pause-295501",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-295501:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-295501",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "968421c7df9896ef130cafb07722b48c470caea9e9ac404061d61d0ab4c7c407",
	                "LowerDir": "/var/lib/docker/overlay2/d540506d9a4ed58bf099fcdf8789b1803b6bbb4ef9fbbce681522acf8ad7cd02-init/diff:/var/lib/docker/overlay2/5b012372cfb54f6c71f4d7f0bca0124866eeda530eaa04bd84a67c7b4c8b35a8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d540506d9a4ed58bf099fcdf8789b1803b6bbb4ef9fbbce681522acf8ad7cd02/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d540506d9a4ed58bf099fcdf8789b1803b6bbb4ef9fbbce681522acf8ad7cd02/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d540506d9a4ed58bf099fcdf8789b1803b6bbb4ef9fbbce681522acf8ad7cd02/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-295501",
	                "Source": "/var/lib/docker/volumes/pause-295501/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-295501",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-295501",
	                "name.minikube.sigs.k8s.io": "pause-295501",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "321ef45b004e804139b1abab70942530f7f196de1f321290e0e3fbd69ffc8967",
	            "SandboxKey": "/var/run/docker/netns/321ef45b004e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33049"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33050"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33051"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33052"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-295501": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "963a9ee72cd35456162a3b76175450bb7d96a82608b4e2b95e3bbc0ccdc222ec",
	                    "EndpointID": "cea62c5f5e64e81855149fe2affc5280d0a44794cbe5ec428fefb5b54c830066",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "d2:81:07:6a:57:66",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-295501",
	                        "968421c7df98"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
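
Note that the inspect output reports "Paused": false even though a pause was attempted; minikube pause acts on the processes inside the node container, not on the Docker container itself, so the interesting state lives in the fields above. A minimal Go sketch that reduces the inspect JSON to those fields, assuming the container name from this report:

	// inspect_state.go: decodes `docker inspect` output down to the
	// state fields shown above. The container name is taken from this report.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type containerInfo struct {
		Name  string
		State struct {
			Status  string
			Running bool
			Paused  bool
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "pause-295501").Output()
		if err != nil {
			panic(err)
		}
		var infos []containerInfo // docker inspect always returns a JSON array
		if err := json.Unmarshal(out, &infos); err != nil {
			panic(err)
		}
		for _, c := range infos {
			fmt.Printf("%s: status=%s running=%v paused=%v\n",
				c.Name, c.State.Status, c.State.Running, c.State.Paused)
		}
	}

Against the output above this prints "/pause-295501: status=running running=true paused=false".
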
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-295501 -n pause-295501
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-295501 -n pause-295501: exit status 2 (357.050042ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-295501 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-295501 logs -n 25: (1.058888134s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-628644 sudo cat /lib/systemd/system/containerd.service                                                                         │ cilium-628644             │ jenkins │ v1.37.0 │ 29 Nov 25 09:08 UTC │                     │
	│ ssh     │ -p cilium-628644 sudo cat /etc/containerd/config.toml                                                                                    │ cilium-628644             │ jenkins │ v1.37.0 │ 29 Nov 25 09:08 UTC │                     │
	│ ssh     │ -p cilium-628644 sudo containerd config dump                                                                                             │ cilium-628644             │ jenkins │ v1.37.0 │ 29 Nov 25 09:08 UTC │                     │
	│ ssh     │ -p cilium-628644 sudo systemctl status crio --all --full --no-pager                                                                      │ cilium-628644             │ jenkins │ v1.37.0 │ 29 Nov 25 09:08 UTC │                     │
	│ ssh     │ -p cilium-628644 sudo systemctl cat crio --no-pager                                                                                      │ cilium-628644             │ jenkins │ v1.37.0 │ 29 Nov 25 09:08 UTC │                     │
	│ ssh     │ -p cilium-628644 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                            │ cilium-628644             │ jenkins │ v1.37.0 │ 29 Nov 25 09:08 UTC │                     │
	│ ssh     │ -p cilium-628644 sudo crio config                                                                                                        │ cilium-628644             │ jenkins │ v1.37.0 │ 29 Nov 25 09:08 UTC │                     │
	│ delete  │ -p cilium-628644                                                                                                                         │ cilium-628644             │ jenkins │ v1.37.0 │ 29 Nov 25 09:08 UTC │ 29 Nov 25 09:08 UTC │
	│ start   │ -p running-upgrade-246907 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-246907    │ jenkins │ v1.35.0 │ 29 Nov 25 09:08 UTC │ 29 Nov 25 09:08 UTC │
	│ ssh     │ cert-options-207443 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                              │ cert-options-207443       │ jenkins │ v1.37.0 │ 29 Nov 25 09:08 UTC │ 29 Nov 25 09:08 UTC │
	│ ssh     │ -p cert-options-207443 -- sudo cat /etc/kubernetes/admin.conf                                                                            │ cert-options-207443       │ jenkins │ v1.37.0 │ 29 Nov 25 09:08 UTC │ 29 Nov 25 09:08 UTC │
	│ delete  │ -p cert-options-207443                                                                                                                   │ cert-options-207443       │ jenkins │ v1.37.0 │ 29 Nov 25 09:08 UTC │ 29 Nov 25 09:08 UTC │
	│ start   │ -p kubernetes-upgrade-665137 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-665137 │ jenkins │ v1.37.0 │ 29 Nov 25 09:08 UTC │ 29 Nov 25 09:08 UTC │
	│ delete  │ -p force-systemd-env-076374                                                                                                              │ force-systemd-env-076374  │ jenkins │ v1.37.0 │ 29 Nov 25 09:08 UTC │ 29 Nov 25 09:08 UTC │
	│ start   │ -p stopped-upgrade-355524 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-355524    │ jenkins │ v1.35.0 │ 29 Nov 25 09:08 UTC │ 29 Nov 25 09:09 UTC │
	│ start   │ -p running-upgrade-246907 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-246907    │ jenkins │ v1.37.0 │ 29 Nov 25 09:08 UTC │                     │
	│ stop    │ -p kubernetes-upgrade-665137                                                                                                             │ kubernetes-upgrade-665137 │ jenkins │ v1.37.0 │ 29 Nov 25 09:08 UTC │ 29 Nov 25 09:09 UTC │
	│ start   │ -p kubernetes-upgrade-665137 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-665137 │ jenkins │ v1.37.0 │ 29 Nov 25 09:09 UTC │                     │
	│ stop    │ stopped-upgrade-355524 stop                                                                                                              │ stopped-upgrade-355524    │ jenkins │ v1.35.0 │ 29 Nov 25 09:09 UTC │ 29 Nov 25 09:09 UTC │
	│ start   │ -p stopped-upgrade-355524 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-355524    │ jenkins │ v1.37.0 │ 29 Nov 25 09:09 UTC │                     │
	│ start   │ -p cert-expiration-836438 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                │ cert-expiration-836438    │ jenkins │ v1.37.0 │ 29 Nov 25 09:10 UTC │ 29 Nov 25 09:10 UTC │
	│ delete  │ -p cert-expiration-836438                                                                                                                │ cert-expiration-836438    │ jenkins │ v1.37.0 │ 29 Nov 25 09:10 UTC │ 29 Nov 25 09:10 UTC │
	│ start   │ -p pause-295501 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-295501              │ jenkins │ v1.37.0 │ 29 Nov 25 09:10 UTC │ 29 Nov 25 09:11 UTC │
	│ start   │ -p pause-295501 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-295501              │ jenkins │ v1.37.0 │ 29 Nov 25 09:11 UTC │ 29 Nov 25 09:11 UTC │
	│ pause   │ -p pause-295501 --alsologtostderr -v=5                                                                                                   │ pause-295501              │ jenkins │ v1.37.0 │ 29 Nov 25 09:11 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:11:36
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 09:11:36.610088  237717 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:11:36.610439  237717 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:11:36.610452  237717 out.go:374] Setting ErrFile to fd 2...
	I1129 09:11:36.610460  237717 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:11:36.610820  237717 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 09:11:36.611460  237717 out.go:368] Setting JSON to false
	I1129 09:11:36.613045  237717 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3249,"bootTime":1764404248,"procs":319,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 09:11:36.613135  237717 start.go:143] virtualization: kvm guest
	I1129 09:11:36.615093  237717 out.go:179] * [pause-295501] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 09:11:36.616388  237717 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:11:36.616426  237717 notify.go:221] Checking for updates...
	I1129 09:11:36.618598  237717 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:11:36.619860  237717 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 09:11:36.621448  237717 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5652/.minikube
	I1129 09:11:36.622616  237717 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 09:11:36.623732  237717 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:11:36.625313  237717 config.go:182] Loaded profile config "pause-295501": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:11:36.626247  237717 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:11:36.653610  237717 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1129 09:11:36.653719  237717 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:11:36.719315  237717 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:84 SystemTime:2025-11-29 09:11:36.70843075 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:11:36.719424  237717 docker.go:319] overlay module found
	I1129 09:11:36.721192  237717 out.go:179] * Using the docker driver based on existing profile
	I1129 09:11:36.722374  237717 start.go:309] selected driver: docker
	I1129 09:11:36.722392  237717 start.go:927] validating driver "docker" against &{Name:pause-295501 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-295501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:11:36.722533  237717 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:11:36.722636  237717 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:11:36.785791  237717 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:84 SystemTime:2025-11-29 09:11:36.773057508 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:11:36.786768  237717 cni.go:84] Creating CNI manager for ""
	I1129 09:11:36.786872  237717 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:11:36.786939  237717 start.go:353] cluster config:
	{Name:pause-295501 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-295501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:11:36.788935  237717 out.go:179] * Starting "pause-295501" primary control-plane node in "pause-295501" cluster
	I1129 09:11:36.789988  237717 cache.go:134] Beginning downloading kic base image for docker with crio
	I1129 09:11:36.791200  237717 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 09:11:36.792278  237717 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:11:36.792323  237717 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1129 09:11:36.792349  237717 cache.go:65] Caching tarball of preloaded images
	I1129 09:11:36.792389  237717 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 09:11:36.792464  237717 preload.go:238] Found /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1129 09:11:36.792481  237717 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1129 09:11:36.792645  237717 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/pause-295501/config.json ...
	I1129 09:11:36.820389  237717 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 09:11:36.820409  237717 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 09:11:36.820426  237717 cache.go:243] Successfully downloaded all kic artifacts
	I1129 09:11:36.820460  237717 start.go:360] acquireMachinesLock for pause-295501: {Name:mk1ad36e18b0d7e5b2ef49f75a67ac102a990d08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:11:36.820525  237717 start.go:364] duration metric: took 40.962µs to acquireMachinesLock for "pause-295501"
	I1129 09:11:36.820545  237717 start.go:96] Skipping create...Using existing machine configuration
	I1129 09:11:36.820555  237717 fix.go:54] fixHost starting: 
	I1129 09:11:36.820810  237717 cli_runner.go:164] Run: docker container inspect pause-295501 --format={{.State.Status}}
	I1129 09:11:36.842595  237717 fix.go:112] recreateIfNeeded on pause-295501: state=Running err=<nil>
	W1129 09:11:36.842635  237717 fix.go:138] unexpected machine state, will restart: <nil>
	I1129 09:11:33.611728  218317 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1129 09:11:33.612190  218317 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1129 09:11:33.612251  218317 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:11:33.612307  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:11:33.641602  218317 cri.go:89] found id: "ad97145496ded2f71cd6c4470059d5cc8eba68fc6ea8dd528688b83ade8c4db8"
	I1129 09:11:33.641629  218317 cri.go:89] found id: ""
	I1129 09:11:33.641640  218317 logs.go:282] 1 containers: [ad97145496ded2f71cd6c4470059d5cc8eba68fc6ea8dd528688b83ade8c4db8]
	I1129 09:11:33.641701  218317 ssh_runner.go:195] Run: which crictl
	I1129 09:11:33.646003  218317 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 09:11:33.646083  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:11:33.674768  218317 cri.go:89] found id: ""
	I1129 09:11:33.674791  218317 logs.go:282] 0 containers: []
	W1129 09:11:33.674799  218317 logs.go:284] No container was found matching "etcd"
	I1129 09:11:33.674805  218317 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 09:11:33.674875  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:11:33.702117  218317 cri.go:89] found id: ""
	I1129 09:11:33.702142  218317 logs.go:282] 0 containers: []
	W1129 09:11:33.702152  218317 logs.go:284] No container was found matching "coredns"
	I1129 09:11:33.702160  218317 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:11:33.702222  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:11:33.731396  218317 cri.go:89] found id: "d36b98de5d776212c9777ea4840006b8e9bc445d70128594889132af99755d99"
	I1129 09:11:33.731418  218317 cri.go:89] found id: ""
	I1129 09:11:33.731428  218317 logs.go:282] 1 containers: [d36b98de5d776212c9777ea4840006b8e9bc445d70128594889132af99755d99]
	I1129 09:11:33.731485  218317 ssh_runner.go:195] Run: which crictl
	I1129 09:11:33.735479  218317 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:11:33.735541  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:11:33.768572  218317 cri.go:89] found id: ""
	I1129 09:11:33.768595  218317 logs.go:282] 0 containers: []
	W1129 09:11:33.768602  218317 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:11:33.768609  218317 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:11:33.768654  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:11:33.801794  218317 cri.go:89] found id: "ead086a0104bf53c59ed5d8e223ccbb7be900745b5ce874ae12a9c40c089d336"
	I1129 09:11:33.801818  218317 cri.go:89] found id: ""
	I1129 09:11:33.801828  218317 logs.go:282] 1 containers: [ead086a0104bf53c59ed5d8e223ccbb7be900745b5ce874ae12a9c40c089d336]
	I1129 09:11:33.801921  218317 ssh_runner.go:195] Run: which crictl
	I1129 09:11:33.806224  218317 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 09:11:33.806290  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:11:33.839667  218317 cri.go:89] found id: ""
	I1129 09:11:33.839698  218317 logs.go:282] 0 containers: []
	W1129 09:11:33.839710  218317 logs.go:284] No container was found matching "kindnet"
	I1129 09:11:33.839720  218317 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:11:33.839782  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:11:33.873888  218317 cri.go:89] found id: ""
	I1129 09:11:33.873914  218317 logs.go:282] 0 containers: []
	W1129 09:11:33.873923  218317 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:11:33.873931  218317 logs.go:123] Gathering logs for kube-controller-manager [ead086a0104bf53c59ed5d8e223ccbb7be900745b5ce874ae12a9c40c089d336] ...
	I1129 09:11:33.873944  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ead086a0104bf53c59ed5d8e223ccbb7be900745b5ce874ae12a9c40c089d336"
	I1129 09:11:33.908306  218317 logs.go:123] Gathering logs for CRI-O ...
	I1129 09:11:33.908335  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 09:11:33.971487  218317 logs.go:123] Gathering logs for container status ...
	I1129 09:11:33.971527  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:11:34.006765  218317 logs.go:123] Gathering logs for kubelet ...
	I1129 09:11:34.006798  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:11:34.101323  218317 logs.go:123] Gathering logs for dmesg ...
	I1129 09:11:34.101363  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:11:34.118485  218317 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:11:34.118517  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:11:34.184350  218317 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:11:34.184374  218317 logs.go:123] Gathering logs for kube-apiserver [ad97145496ded2f71cd6c4470059d5cc8eba68fc6ea8dd528688b83ade8c4db8] ...
	I1129 09:11:34.184390  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ad97145496ded2f71cd6c4470059d5cc8eba68fc6ea8dd528688b83ade8c4db8"
	I1129 09:11:34.219737  218317 logs.go:123] Gathering logs for kube-scheduler [d36b98de5d776212c9777ea4840006b8e9bc445d70128594889132af99755d99] ...
	I1129 09:11:34.219778  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d36b98de5d776212c9777ea4840006b8e9bc445d70128594889132af99755d99"
	I1129 09:11:36.772926  218317 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1129 09:11:36.773389  218317 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1129 09:11:36.773446  218317 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:11:36.773508  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:11:36.807081  218317 cri.go:89] found id: "ad97145496ded2f71cd6c4470059d5cc8eba68fc6ea8dd528688b83ade8c4db8"
	I1129 09:11:36.807123  218317 cri.go:89] found id: ""
	I1129 09:11:36.807135  218317 logs.go:282] 1 containers: [ad97145496ded2f71cd6c4470059d5cc8eba68fc6ea8dd528688b83ade8c4db8]
	I1129 09:11:36.807200  218317 ssh_runner.go:195] Run: which crictl
	I1129 09:11:36.812020  218317 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 09:11:36.812103  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:11:36.844173  218317 cri.go:89] found id: ""
	I1129 09:11:36.844199  218317 logs.go:282] 0 containers: []
	W1129 09:11:36.844212  218317 logs.go:284] No container was found matching "etcd"
	I1129 09:11:36.844219  218317 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 09:11:36.844277  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:11:36.875731  218317 cri.go:89] found id: ""
	I1129 09:11:36.875761  218317 logs.go:282] 0 containers: []
	W1129 09:11:36.875780  218317 logs.go:284] No container was found matching "coredns"
	I1129 09:11:36.875788  218317 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:11:36.875863  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:11:36.906603  218317 cri.go:89] found id: "d36b98de5d776212c9777ea4840006b8e9bc445d70128594889132af99755d99"
	I1129 09:11:36.906628  218317 cri.go:89] found id: ""
	I1129 09:11:36.906637  218317 logs.go:282] 1 containers: [d36b98de5d776212c9777ea4840006b8e9bc445d70128594889132af99755d99]
	I1129 09:11:36.906695  218317 ssh_runner.go:195] Run: which crictl
	I1129 09:11:36.910792  218317 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:11:36.910889  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:11:36.942155  218317 cri.go:89] found id: ""
	I1129 09:11:36.942187  218317 logs.go:282] 0 containers: []
	W1129 09:11:36.942199  218317 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:11:36.942207  218317 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:11:36.942269  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:11:36.971467  218317 cri.go:89] found id: "ead086a0104bf53c59ed5d8e223ccbb7be900745b5ce874ae12a9c40c089d336"
	I1129 09:11:36.971494  218317 cri.go:89] found id: ""
	I1129 09:11:36.971503  218317 logs.go:282] 1 containers: [ead086a0104bf53c59ed5d8e223ccbb7be900745b5ce874ae12a9c40c089d336]
	I1129 09:11:36.971571  218317 ssh_runner.go:195] Run: which crictl
	I1129 09:11:36.975573  218317 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 09:11:36.975642  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:11:37.004655  218317 cri.go:89] found id: ""
	I1129 09:11:37.004685  218317 logs.go:282] 0 containers: []
	W1129 09:11:37.004694  218317 logs.go:284] No container was found matching "kindnet"
	I1129 09:11:37.004700  218317 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:11:37.004761  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:11:35.825079  214471 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1129 09:11:35.825592  214471 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1129 09:11:35.825668  214471 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:11:35.825734  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:11:35.862164  214471 cri.go:89] found id: "1dff8abbda684fe8a803a0d7feee0d46229689119dc2199e9c5b4d859caf176d"
	I1129 09:11:35.862191  214471 cri.go:89] found id: ""
	I1129 09:11:35.862199  214471 logs.go:282] 1 containers: [1dff8abbda684fe8a803a0d7feee0d46229689119dc2199e9c5b4d859caf176d]
	I1129 09:11:35.862244  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:35.866190  214471 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 09:11:35.866246  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:11:35.902809  214471 cri.go:89] found id: "6cdd44d8db846ffb376bf831123c4ce432f87f0616753fa1802db2474c6da5e0"
	I1129 09:11:35.902835  214471 cri.go:89] found id: ""
	I1129 09:11:35.902857  214471 logs.go:282] 1 containers: [6cdd44d8db846ffb376bf831123c4ce432f87f0616753fa1802db2474c6da5e0]
	I1129 09:11:35.902914  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:35.906893  214471 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 09:11:35.906958  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:11:35.942933  214471 cri.go:89] found id: ""
	I1129 09:11:35.942967  214471 logs.go:282] 0 containers: []
	W1129 09:11:35.942976  214471 logs.go:284] No container was found matching "coredns"
	I1129 09:11:35.942982  214471 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:11:35.943035  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:11:35.979802  214471 cri.go:89] found id: "fcff0da386e90a7e2da6915329552d3b7f08edc5e745bc390cd94526235e34ad"
	I1129 09:11:35.979822  214471 cri.go:89] found id: "fc7be3672b1d7cde9d3cd515c0ebafb17a2ff060a0fd47dbbb990b57cde6c4a7"
	I1129 09:11:35.979826  214471 cri.go:89] found id: ""
	I1129 09:11:35.979833  214471 logs.go:282] 2 containers: [fcff0da386e90a7e2da6915329552d3b7f08edc5e745bc390cd94526235e34ad fc7be3672b1d7cde9d3cd515c0ebafb17a2ff060a0fd47dbbb990b57cde6c4a7]
	I1129 09:11:35.979918  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:35.984011  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:35.987811  214471 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:11:35.987914  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:11:36.023943  214471 cri.go:89] found id: "31d6025ff82b3e085a9904c2d696e3cd87b765553852bee59bf02c5ca2982bb3"
	I1129 09:11:36.023971  214471 cri.go:89] found id: "9bc5d76fdb0a28cd329b1051fec59088325c606b89a8b925d3ab5b1519649cc3"
	I1129 09:11:36.023977  214471 cri.go:89] found id: ""
	I1129 09:11:36.023985  214471 logs.go:282] 2 containers: [31d6025ff82b3e085a9904c2d696e3cd87b765553852bee59bf02c5ca2982bb3 9bc5d76fdb0a28cd329b1051fec59088325c606b89a8b925d3ab5b1519649cc3]
	I1129 09:11:36.024035  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:36.028166  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:36.032192  214471 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:11:36.032250  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:11:36.068481  214471 cri.go:89] found id: "33f91ffe40a0d328d35ddf61661372bb263a7bc34d24185bd700d35146f1a322"
	I1129 09:11:36.068504  214471 cri.go:89] found id: ""
	I1129 09:11:36.068512  214471 logs.go:282] 1 containers: [33f91ffe40a0d328d35ddf61661372bb263a7bc34d24185bd700d35146f1a322]
	I1129 09:11:36.068570  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:36.072940  214471 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 09:11:36.073007  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:11:36.109833  214471 cri.go:89] found id: "fdaf7e1d1f92f5357e2a36a658d6c56cb5e970830f15eb88888d800f36574e19"
	I1129 09:11:36.109873  214471 cri.go:89] found id: "ec138943901d9c0c58c204789d15a0d1d6edad3161a00469e53b5fd0908fe194"
	I1129 09:11:36.109879  214471 cri.go:89] found id: ""
	I1129 09:11:36.109889  214471 logs.go:282] 2 containers: [fdaf7e1d1f92f5357e2a36a658d6c56cb5e970830f15eb88888d800f36574e19 ec138943901d9c0c58c204789d15a0d1d6edad3161a00469e53b5fd0908fe194]
	I1129 09:11:36.109950  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:36.114175  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:36.118096  214471 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:11:36.118185  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:11:36.153699  214471 cri.go:89] found id: "d29d7545c01a5541cd53753145eb6b8f887d1ca4f6e2c856d793a2af6f0f120b"
	I1129 09:11:36.153720  214471 cri.go:89] found id: ""
	I1129 09:11:36.153729  214471 logs.go:282] 1 containers: [d29d7545c01a5541cd53753145eb6b8f887d1ca4f6e2c856d793a2af6f0f120b]
	I1129 09:11:36.153787  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:36.157903  214471 logs.go:123] Gathering logs for kindnet [fdaf7e1d1f92f5357e2a36a658d6c56cb5e970830f15eb88888d800f36574e19] ...
	I1129 09:11:36.157926  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fdaf7e1d1f92f5357e2a36a658d6c56cb5e970830f15eb88888d800f36574e19"
	I1129 09:11:36.202462  214471 logs.go:123] Gathering logs for kubelet ...
	I1129 09:11:36.202504  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:11:36.297721  214471 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:11:36.297763  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:11:36.360967  214471 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:11:36.360991  214471 logs.go:123] Gathering logs for kube-controller-manager [33f91ffe40a0d328d35ddf61661372bb263a7bc34d24185bd700d35146f1a322] ...
	I1129 09:11:36.361003  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f91ffe40a0d328d35ddf61661372bb263a7bc34d24185bd700d35146f1a322"
	I1129 09:11:36.397247  214471 logs.go:123] Gathering logs for CRI-O ...
	I1129 09:11:36.397278  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 09:11:36.460996  214471 logs.go:123] Gathering logs for container status ...
	I1129 09:11:36.461039  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:11:36.508700  214471 logs.go:123] Gathering logs for kube-proxy [31d6025ff82b3e085a9904c2d696e3cd87b765553852bee59bf02c5ca2982bb3] ...
	I1129 09:11:36.508733  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31d6025ff82b3e085a9904c2d696e3cd87b765553852bee59bf02c5ca2982bb3"
	I1129 09:11:36.561438  214471 logs.go:123] Gathering logs for kube-proxy [9bc5d76fdb0a28cd329b1051fec59088325c606b89a8b925d3ab5b1519649cc3] ...
	I1129 09:11:36.561480  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bc5d76fdb0a28cd329b1051fec59088325c606b89a8b925d3ab5b1519649cc3"
	I1129 09:11:36.604632  214471 logs.go:123] Gathering logs for kindnet [ec138943901d9c0c58c204789d15a0d1d6edad3161a00469e53b5fd0908fe194] ...
	I1129 09:11:36.604664  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec138943901d9c0c58c204789d15a0d1d6edad3161a00469e53b5fd0908fe194"
	I1129 09:11:36.649526  214471 logs.go:123] Gathering logs for storage-provisioner [d29d7545c01a5541cd53753145eb6b8f887d1ca4f6e2c856d793a2af6f0f120b] ...
	I1129 09:11:36.649602  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d29d7545c01a5541cd53753145eb6b8f887d1ca4f6e2c856d793a2af6f0f120b"
	I1129 09:11:36.696684  214471 logs.go:123] Gathering logs for dmesg ...
	I1129 09:11:36.696710  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:11:36.715307  214471 logs.go:123] Gathering logs for kube-apiserver [1dff8abbda684fe8a803a0d7feee0d46229689119dc2199e9c5b4d859caf176d] ...
	I1129 09:11:36.715345  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1dff8abbda684fe8a803a0d7feee0d46229689119dc2199e9c5b4d859caf176d"
	I1129 09:11:36.765667  214471 logs.go:123] Gathering logs for etcd [6cdd44d8db846ffb376bf831123c4ce432f87f0616753fa1802db2474c6da5e0] ...
	I1129 09:11:36.765704  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6cdd44d8db846ffb376bf831123c4ce432f87f0616753fa1802db2474c6da5e0"
	I1129 09:11:36.824292  214471 logs.go:123] Gathering logs for kube-scheduler [fcff0da386e90a7e2da6915329552d3b7f08edc5e745bc390cd94526235e34ad] ...
	I1129 09:11:36.824335  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fcff0da386e90a7e2da6915329552d3b7f08edc5e745bc390cd94526235e34ad"
	I1129 09:11:36.905231  214471 logs.go:123] Gathering logs for kube-scheduler [fc7be3672b1d7cde9d3cd515c0ebafb17a2ff060a0fd47dbbb990b57cde6c4a7] ...
	I1129 09:11:36.905266  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc7be3672b1d7cde9d3cd515c0ebafb17a2ff060a0fd47dbbb990b57cde6c4a7"
	I1129 09:11:36.844746  237717 out.go:252] * Updating the running docker "pause-295501" container ...
	I1129 09:11:36.844788  237717 machine.go:94] provisionDockerMachine start ...
	I1129 09:11:36.844903  237717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-295501
	I1129 09:11:36.867948  237717 main.go:143] libmachine: Using SSH client type: native
	I1129 09:11:36.868294  237717 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33049 <nil> <nil>}
	I1129 09:11:36.868319  237717 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:11:37.022477  237717 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-295501
	
	I1129 09:11:37.022513  237717 ubuntu.go:182] provisioning hostname "pause-295501"
	I1129 09:11:37.022594  237717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-295501
	I1129 09:11:37.046373  237717 main.go:143] libmachine: Using SSH client type: native
	I1129 09:11:37.046738  237717 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33049 <nil> <nil>}
	I1129 09:11:37.046761  237717 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-295501 && echo "pause-295501" | sudo tee /etc/hostname
	I1129 09:11:37.214702  237717 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-295501
	
	I1129 09:11:37.214784  237717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-295501
	I1129 09:11:37.237618  237717 main.go:143] libmachine: Using SSH client type: native
	I1129 09:11:37.237976  237717 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33049 <nil> <nil>}
	I1129 09:11:37.238003  237717 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-295501' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-295501/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-295501' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:11:37.393446  237717 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 09:11:37.393479  237717 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-5652/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-5652/.minikube}
	I1129 09:11:37.393504  237717 ubuntu.go:190] setting up certificates
	I1129 09:11:37.393518  237717 provision.go:84] configureAuth start
	I1129 09:11:37.393579  237717 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-295501
	I1129 09:11:37.416151  237717 provision.go:143] copyHostCerts
	I1129 09:11:37.416229  237717 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem, removing ...
	I1129 09:11:37.416252  237717 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem
	I1129 09:11:37.416345  237717 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem (1078 bytes)
	I1129 09:11:37.416525  237717 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem, removing ...
	I1129 09:11:37.416541  237717 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem
	I1129 09:11:37.416584  237717 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem (1123 bytes)
	I1129 09:11:37.416690  237717 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem, removing ...
	I1129 09:11:37.416702  237717 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem
	I1129 09:11:37.416743  237717 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem (1675 bytes)
	I1129 09:11:37.416831  237717 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem org=jenkins.pause-295501 san=[127.0.0.1 192.168.85.2 localhost minikube pause-295501]
	I1129 09:11:37.442322  237717 provision.go:177] copyRemoteCerts
	I1129 09:11:37.442392  237717 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:11:37.442441  237717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-295501
	I1129 09:11:37.462924  237717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33049 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/pause-295501/id_rsa Username:docker}
	I1129 09:11:37.570219  237717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1129 09:11:37.590962  237717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1129 09:11:37.611389  237717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1129 09:11:37.631201  237717 provision.go:87] duration metric: took 237.670256ms to configureAuth
	I1129 09:11:37.631238  237717 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:11:37.631466  237717 config.go:182] Loaded profile config "pause-295501": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:11:37.631583  237717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-295501
	I1129 09:11:37.654340  237717 main.go:143] libmachine: Using SSH client type: native
	I1129 09:11:37.654674  237717 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33049 <nil> <nil>}
	I1129 09:11:37.654709  237717 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 09:11:38.002511  237717 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 09:11:38.002538  237717 machine.go:97] duration metric: took 1.157738998s to provisionDockerMachine
	I1129 09:11:38.002549  237717 start.go:293] postStartSetup for "pause-295501" (driver="docker")
	I1129 09:11:38.002559  237717 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:11:38.002607  237717 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:11:38.002650  237717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-295501
	I1129 09:11:38.022540  237717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33049 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/pause-295501/id_rsa Username:docker}
	I1129 09:11:38.127016  237717 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:11:38.130956  237717 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:11:38.130986  237717 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:11:38.130996  237717 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5652/.minikube/addons for local assets ...
	I1129 09:11:38.131043  237717 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5652/.minikube/files for local assets ...
	I1129 09:11:38.131112  237717 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem -> 92162.pem in /etc/ssl/certs
	I1129 09:11:38.131213  237717 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:11:38.139910  237717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem --> /etc/ssl/certs/92162.pem (1708 bytes)
	I1129 09:11:38.159979  237717 start.go:296] duration metric: took 157.417141ms for postStartSetup
	I1129 09:11:38.160088  237717 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:11:38.160139  237717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-295501
	I1129 09:11:38.179660  237717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33049 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/pause-295501/id_rsa Username:docker}
	I1129 09:11:38.280587  237717 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 09:11:38.285818  237717 fix.go:56] duration metric: took 1.465256722s for fixHost
	I1129 09:11:38.285873  237717 start.go:83] releasing machines lock for "pause-295501", held for 1.465336471s
	I1129 09:11:38.285958  237717 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-295501
	I1129 09:11:38.305486  237717 ssh_runner.go:195] Run: cat /version.json
	I1129 09:11:38.305551  237717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-295501
	I1129 09:11:38.305576  237717 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:11:38.305676  237717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-295501
	I1129 09:11:38.325651  237717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33049 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/pause-295501/id_rsa Username:docker}
	I1129 09:11:38.325993  237717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33049 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/pause-295501/id_rsa Username:docker}
	I1129 09:11:38.477537  237717 ssh_runner.go:195] Run: systemctl --version
	I1129 09:11:38.484701  237717 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 09:11:38.523899  237717 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:11:38.529083  237717 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:11:38.529157  237717 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:11:38.538226  237717 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1129 09:11:38.538256  237717 start.go:496] detecting cgroup driver to use...
	I1129 09:11:38.538291  237717 detect.go:190] detected "systemd" cgroup driver on host os
	I1129 09:11:38.538338  237717 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 09:11:38.554313  237717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 09:11:38.568374  237717 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:11:38.568436  237717 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:11:38.584933  237717 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:11:38.598824  237717 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:11:38.718143  237717 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:11:38.831183  237717 docker.go:234] disabling docker service ...
	I1129 09:11:38.831250  237717 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:11:38.846685  237717 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:11:38.860696  237717 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:11:38.972575  237717 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:11:39.082673  237717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:11:39.096269  237717 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:11:39.112179  237717 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 09:11:39.112254  237717 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:11:39.122438  237717 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1129 09:11:39.122515  237717 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:11:39.132764  237717 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:11:39.142828  237717 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:11:39.152940  237717 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:11:39.162385  237717 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:11:39.172436  237717 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:11:39.181909  237717 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:11:39.191543  237717 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:11:39.200199  237717 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:11:39.208660  237717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:11:39.318149  237717 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1129 09:11:39.517670  237717 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 09:11:39.517734  237717 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 09:11:39.522421  237717 start.go:564] Will wait 60s for crictl version
	I1129 09:11:39.522503  237717 ssh_runner.go:195] Run: which crictl
	I1129 09:11:39.526579  237717 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:11:39.553771  237717 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1129 09:11:39.553868  237717 ssh_runner.go:195] Run: crio --version
	I1129 09:11:39.586340  237717 ssh_runner.go:195] Run: crio --version
	I1129 09:11:39.621800  237717 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1129 09:11:39.622951  237717 cli_runner.go:164] Run: docker network inspect pause-295501 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:11:39.643640  237717 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1129 09:11:39.648153  237717 kubeadm.go:884] updating cluster {Name:pause-295501 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-295501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:11:39.648344  237717 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:11:39.648410  237717 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:11:39.685026  237717 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:11:39.685053  237717 crio.go:433] Images already preloaded, skipping extraction
	I1129 09:11:39.685108  237717 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:11:39.714495  237717 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:11:39.714520  237717 cache_images.go:86] Images are preloaded, skipping loading
	I1129 09:11:39.714529  237717 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1129 09:11:39.714685  237717 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-295501 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-295501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
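
The drop-in above overrides the stock kubelet unit: the first, empty ExecStart= clears any inherited start command before the second line supplies minikube's own. A minimal Go sketch of rendering such a drop-in with text/template — the template body and field names here are illustrative assumptions, not minikube's actual source:

package main

import (
	"os"
	"text/template"
)

// Illustrative only: these fields and the template below are assumptions,
// not minikube's real kubelet template.
type kubeletOpts struct {
	BinDir, NodeName, NodeIP string
}

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --hostname-override={{.NodeName}} --node-ip={{.NodeIP}} --kubeconfig=/etc/kubernetes/kubelet.conf

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// The empty ExecStart= clears the command inherited from kubelet.service
	// before the override line sets a new one.
	if err := t.Execute(os.Stdout, kubeletOpts{
		BinDir:   "/var/lib/minikube/binaries/v1.34.1",
		NodeName: "pause-295501",
		NodeIP:   "192.168.85.2",
	}); err != nil {
		panic(err)
	}
}
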
	I1129 09:11:39.714769  237717 ssh_runner.go:195] Run: crio config
	I1129 09:11:39.767932  237717 cni.go:84] Creating CNI manager for ""
	I1129 09:11:39.767955  237717 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:11:39.767977  237717 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 09:11:39.768003  237717 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-295501 NodeName:pause-295501 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:11:39.768181  237717 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-295501"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
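
The kubeadm YAML above deliberately disables disk-pressure handling for CI: imageGCHighThresholdPercent is 100 and every evictionHard threshold is "0%". A small sketch, assuming the third-party gopkg.in/yaml.v3 module, of reading those fields back out of the KubeletConfiguration document:

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// A trimmed copy of the KubeletConfiguration document from the log above,
// kept only to the fields this sketch reads back.
const kubeletDoc = `
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
`

func main() {
	var cfg struct {
		Kind         string            `yaml:"kind"`
		ImageGCHigh  int               `yaml:"imageGCHighThresholdPercent"`
		EvictionHard map[string]string `yaml:"evictionHard"`
	}
	if err := yaml.Unmarshal([]byte(kubeletDoc), &cfg); err != nil {
		panic(err)
	}
	// Prints: KubeletConfiguration imageGCHigh=100 evictionHard=map[...]
	fmt.Println(cfg.Kind, "imageGCHigh:", cfg.ImageGCHigh, "evictionHard:", cfg.EvictionHard)
}
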
	I1129 09:11:39.768252  237717 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 09:11:39.777229  237717 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 09:11:39.777309  237717 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:11:39.787769  237717 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1129 09:11:39.803706  237717 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:11:39.819138  237717 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1129 09:11:39.833675  237717 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1129 09:11:39.838158  237717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:11:39.956854  237717 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:11:39.973108  237717 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/pause-295501 for IP: 192.168.85.2
	I1129 09:11:39.973144  237717 certs.go:195] generating shared ca certs ...
	I1129 09:11:39.973166  237717 certs.go:227] acquiring lock for ca certs: {Name:mk9945b3aa42b3d50b9bfd795af3a1c63f4e35bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:11:39.973359  237717 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key
	I1129 09:11:39.973427  237717 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key
	I1129 09:11:39.973442  237717 certs.go:257] generating profile certs ...
	I1129 09:11:39.973584  237717 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/pause-295501/client.key
	I1129 09:11:39.973668  237717 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/pause-295501/apiserver.key.a9383738
	I1129 09:11:39.973742  237717 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/pause-295501/proxy-client.key
	I1129 09:11:39.973953  237717 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216.pem (1338 bytes)
	W1129 09:11:39.974022  237717 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216_empty.pem, impossibly tiny 0 bytes
	I1129 09:11:39.974037  237717 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem (1679 bytes)
	I1129 09:11:39.974087  237717 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem (1078 bytes)
	I1129 09:11:39.974129  237717 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:11:39.974181  237717 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem (1675 bytes)
	I1129 09:11:39.974244  237717 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem (1708 bytes)
	I1129 09:11:39.975176  237717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:11:39.997039  237717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 09:11:40.020614  237717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:11:40.044221  237717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1129 09:11:40.066434  237717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/pause-295501/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1129 09:11:40.089309  237717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/pause-295501/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 09:11:40.112148  237717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/pause-295501/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:11:40.134525  237717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/pause-295501/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1129 09:11:40.156083  237717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216.pem --> /usr/share/ca-certificates/9216.pem (1338 bytes)
	I1129 09:11:40.178153  237717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem --> /usr/share/ca-certificates/92162.pem (1708 bytes)
	I1129 09:11:40.199367  237717 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:11:40.222169  237717 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:11:40.236967  237717 ssh_runner.go:195] Run: openssl version
	I1129 09:11:40.244348  237717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9216.pem && ln -fs /usr/share/ca-certificates/9216.pem /etc/ssl/certs/9216.pem"
	I1129 09:11:40.255410  237717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9216.pem
	I1129 09:11:40.259982  237717 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:34 /usr/share/ca-certificates/9216.pem
	I1129 09:11:40.260043  237717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9216.pem
	I1129 09:11:40.307814  237717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9216.pem /etc/ssl/certs/51391683.0"
	I1129 09:11:40.318727  237717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/92162.pem && ln -fs /usr/share/ca-certificates/92162.pem /etc/ssl/certs/92162.pem"
	I1129 09:11:40.329742  237717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92162.pem
	I1129 09:11:40.334663  237717 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:34 /usr/share/ca-certificates/92162.pem
	I1129 09:11:40.334737  237717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92162.pem
	I1129 09:11:40.380533  237717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/92162.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 09:11:40.391212  237717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:11:40.402988  237717 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:11:40.407893  237717 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:11:40.407953  237717 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:11:40.455320  237717 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
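
The ca-certificates steps above follow the usual OpenSSL trust-store layout: each PEM is copied into /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its subject hash (b5213941.0 for minikubeCA.pem), which is how TLS clients locate trusted CAs. A rough Go equivalent of the hash-and-link step, using paths from this log and assuming it runs as root:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCA links a CA PEM into /etc/ssl/certs under its OpenSSL subject
// hash, mirroring the `openssl x509 -hash` plus `ln -fs` pair logged above.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	// ln -fs equivalent: drop any stale symlink before creating the new one.
	os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
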
	I1129 09:11:40.465008  237717 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:11:40.469719  237717 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1129 09:11:40.515488  237717 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1129 09:11:40.566689  237717 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1129 09:11:40.612408  237717 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1129 09:11:40.658442  237717 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1129 09:11:40.696605  237717 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
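
Each openssl x509 -checkend 86400 run above exits zero only if the certificate will still be valid 24 hours from now. A pure-Go sketch of the same check with crypto/x509, using one of the paths tested above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -checkend` (86400s = 24h in the log above).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
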
	I1129 09:11:40.739318  237717 kubeadm.go:401] StartCluster: {Name:pause-295501 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-295501 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:11:40.739459  237717 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 09:11:40.739541  237717 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:11:40.771092  237717 cri.go:89] found id: "a8bbbdfd51dab82a8d563b57b00d7ca2194f16c69e6c25030b49a452fa24c721"
	I1129 09:11:40.771119  237717 cri.go:89] found id: "f78ddfecb898a10860947cd0138c0c91e432eb8133b9c1199e2378046faeefb6"
	I1129 09:11:40.771133  237717 cri.go:89] found id: "092bad9397a64f5518143503ccfeb6661abcd1fb66cf16f31703c648078497fe"
	I1129 09:11:40.771137  237717 cri.go:89] found id: "a2578393c04aeba954d20c8ac71275220f67953d2fd43b6e378182a2c47660e2"
	I1129 09:11:40.771140  237717 cri.go:89] found id: "a1cfbd7b390ca764419e13459208f92ccab83b9f560dab98155c947b575c9eb7"
	I1129 09:11:40.771143  237717 cri.go:89] found id: "45e32d34386d74e83368055caa5eb9f063ed6013f8c4cd7c8a1fbf290b1d66ef"
	I1129 09:11:40.771146  237717 cri.go:89] found id: "249d465154ffd1c8223cffce99d25f392334b77035bb4fd7a68ea732b0d1ffaa"
	I1129 09:11:40.771149  237717 cri.go:89] found id: ""
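
The container IDs above come from crictl's --quiet listing, one ID per line; the final empty "found id" is simply the trailing element after splitting the output on newlines. A minimal local sketch of the same listing (it assumes crictl on PATH and root, whereas the real calls run over SSH on the node):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors the cri.go listing logged above: ask crictl for
// quiet output filtered by pod namespace, then split IDs on newlines.
func listContainerIDs(namespace string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace="+namespace).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, id := range strings.Split(string(out), "\n") {
		if id = strings.TrimSpace(id); id != "" { // drop the empty trailing entry
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listContainerIDs("kube-system")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}
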
	I1129 09:11:40.771195  237717 ssh_runner.go:195] Run: sudo runc list -f json
	W1129 09:11:40.785435  237717 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:11:40Z" level=error msg="open /run/runc: no such file or directory"
	I1129 09:11:40.785498  237717 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:11:40.794536  237717 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1129 09:11:40.794560  237717 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1129 09:11:40.794610  237717 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1129 09:11:40.803131  237717 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1129 09:11:40.803931  237717 kubeconfig.go:125] found "pause-295501" server: "https://192.168.85.2:8443"
	I1129 09:11:40.804998  237717 kapi.go:59] client config for pause-295501: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22000-5652/.minikube/profiles/pause-295501/client.crt", KeyFile:"/home/jenkins/minikube-integration/22000-5652/.minikube/profiles/pause-295501/client.key", CAFile:"/home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1129 09:11:40.805426  237717 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1129 09:11:40.805442  237717 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1129 09:11:40.805447  237717 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1129 09:11:40.805451  237717 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1129 09:11:40.805454  237717 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1129 09:11:40.805739  237717 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1129 09:11:40.814523  237717 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1129 09:11:40.814555  237717 kubeadm.go:602] duration metric: took 19.989735ms to restartPrimaryControlPlane
	I1129 09:11:40.814565  237717 kubeadm.go:403] duration metric: took 75.263822ms to StartCluster
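
restartPrimaryControlPlane finishes in ~20ms here because the freshly rendered kubeadm.yaml.new matches the configuration already on the node, so no kubeadm phases are re-run. A minimal sketch of that compare-before-reconfigure idea, using the file paths from this log — an illustration only, not minikube's implementation:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// needsReconfig reports whether the freshly rendered kubeadm config differs
// from the one already active on the node (the `sudo diff -u` step above).
func needsReconfig(currentPath, renderedPath string) (bool, error) {
	current, err := os.ReadFile(currentPath)
	if err != nil {
		return true, nil // no previous config on the node: reconfigure
	}
	rendered, err := os.ReadFile(renderedPath)
	if err != nil {
		return false, err
	}
	return !bytes.Equal(current, rendered), nil
}

func main() {
	changed, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("reconfiguration needed:", changed)
}
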
	I1129 09:11:40.814579  237717 settings.go:142] acquiring lock: {Name:mkebe0b2667fa03f51e92459cd7b7f37fd4c23bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:11:40.814656  237717 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 09:11:40.816348  237717 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/kubeconfig: {Name:mk6280be5f60ace95b1c1acc672c087e2366542f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:11:40.816652  237717 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 09:11:40.816763  237717 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 09:11:40.816919  237717 config.go:182] Loaded profile config "pause-295501": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:11:40.820757  237717 out.go:179] * Verifying Kubernetes components...
	I1129 09:11:40.820757  237717 out.go:179] * Enabled addons: 
	I1129 09:11:36.988515  219843 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 09:11:36.989050  219843 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1129 09:11:36.989115  219843 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:11:36.989180  219843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:11:37.030202  219843 cri.go:89] found id: "c401851db3b97d96a64ca91142bb25ffcb7394406acd89ab1c6ee91b28883f83"
	I1129 09:11:37.030229  219843 cri.go:89] found id: ""
	I1129 09:11:37.030240  219843 logs.go:282] 1 containers: [c401851db3b97d96a64ca91142bb25ffcb7394406acd89ab1c6ee91b28883f83]
	I1129 09:11:37.030309  219843 ssh_runner.go:195] Run: which crictl
	I1129 09:11:37.034740  219843 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 09:11:37.034808  219843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:11:37.077093  219843 cri.go:89] found id: ""
	I1129 09:11:37.077126  219843 logs.go:282] 0 containers: []
	W1129 09:11:37.077137  219843 logs.go:284] No container was found matching "etcd"
	I1129 09:11:37.077146  219843 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 09:11:37.077214  219843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:11:37.120049  219843 cri.go:89] found id: ""
	I1129 09:11:37.120080  219843 logs.go:282] 0 containers: []
	W1129 09:11:37.120091  219843 logs.go:284] No container was found matching "coredns"
	I1129 09:11:37.120099  219843 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:11:37.120169  219843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:11:37.159793  219843 cri.go:89] found id: "1c4cc46e5af366e3fd498dd73ec8725595c451cd6cfe29b8e25688ac13578fc9"
	I1129 09:11:37.159819  219843 cri.go:89] found id: ""
	I1129 09:11:37.159830  219843 logs.go:282] 1 containers: [1c4cc46e5af366e3fd498dd73ec8725595c451cd6cfe29b8e25688ac13578fc9]
	I1129 09:11:37.159912  219843 ssh_runner.go:195] Run: which crictl
	I1129 09:11:37.164147  219843 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:11:37.164229  219843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:11:37.203261  219843 cri.go:89] found id: ""
	I1129 09:11:37.203293  219843 logs.go:282] 0 containers: []
	W1129 09:11:37.203317  219843 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:11:37.203326  219843 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:11:37.203389  219843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:11:37.245966  219843 cri.go:89] found id: "e88d98b3aacbd0d3318bd26614fb599cf82b0f04661b694435b4e58ea4629116"
	I1129 09:11:37.245991  219843 cri.go:89] found id: ""
	I1129 09:11:37.246002  219843 logs.go:282] 1 containers: [e88d98b3aacbd0d3318bd26614fb599cf82b0f04661b694435b4e58ea4629116]
	I1129 09:11:37.246077  219843 ssh_runner.go:195] Run: which crictl
	I1129 09:11:37.250685  219843 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 09:11:37.250767  219843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:11:37.294903  219843 cri.go:89] found id: ""
	I1129 09:11:37.294930  219843 logs.go:282] 0 containers: []
	W1129 09:11:37.294938  219843 logs.go:284] No container was found matching "kindnet"
	I1129 09:11:37.294944  219843 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:11:37.295001  219843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:11:37.335264  219843 cri.go:89] found id: ""
	I1129 09:11:37.335296  219843 logs.go:282] 0 containers: []
	W1129 09:11:37.335311  219843 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:11:37.335323  219843 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:11:37.335339  219843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:11:37.411900  219843 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:11:37.411923  219843 logs.go:123] Gathering logs for kube-apiserver [c401851db3b97d96a64ca91142bb25ffcb7394406acd89ab1c6ee91b28883f83] ...
	I1129 09:11:37.411937  219843 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c401851db3b97d96a64ca91142bb25ffcb7394406acd89ab1c6ee91b28883f83"
	I1129 09:11:37.455613  219843 logs.go:123] Gathering logs for kube-scheduler [1c4cc46e5af366e3fd498dd73ec8725595c451cd6cfe29b8e25688ac13578fc9] ...
	I1129 09:11:37.455644  219843 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c4cc46e5af366e3fd498dd73ec8725595c451cd6cfe29b8e25688ac13578fc9"
	I1129 09:11:37.531181  219843 logs.go:123] Gathering logs for kube-controller-manager [e88d98b3aacbd0d3318bd26614fb599cf82b0f04661b694435b4e58ea4629116] ...
	I1129 09:11:37.531224  219843 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e88d98b3aacbd0d3318bd26614fb599cf82b0f04661b694435b4e58ea4629116"
	I1129 09:11:37.569892  219843 logs.go:123] Gathering logs for CRI-O ...
	I1129 09:11:37.569927  219843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 09:11:37.616758  219843 logs.go:123] Gathering logs for container status ...
	I1129 09:11:37.616793  219843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:11:37.659511  219843 logs.go:123] Gathering logs for kubelet ...
	I1129 09:11:37.659538  219843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:11:37.749494  219843 logs.go:123] Gathering logs for dmesg ...
	I1129 09:11:37.749535  219843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:11:40.268097  219843 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 09:11:40.268604  219843 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1129 09:11:40.268671  219843 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:11:40.268733  219843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:11:40.313690  219843 cri.go:89] found id: "c401851db3b97d96a64ca91142bb25ffcb7394406acd89ab1c6ee91b28883f83"
	I1129 09:11:40.313717  219843 cri.go:89] found id: ""
	I1129 09:11:40.313728  219843 logs.go:282] 1 containers: [c401851db3b97d96a64ca91142bb25ffcb7394406acd89ab1c6ee91b28883f83]
	I1129 09:11:40.313792  219843 ssh_runner.go:195] Run: which crictl
	I1129 09:11:40.318223  219843 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 09:11:40.318292  219843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:11:40.363176  219843 cri.go:89] found id: ""
	I1129 09:11:40.363203  219843 logs.go:282] 0 containers: []
	W1129 09:11:40.363214  219843 logs.go:284] No container was found matching "etcd"
	I1129 09:11:40.363221  219843 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 09:11:40.363278  219843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:11:40.406516  219843 cri.go:89] found id: ""
	I1129 09:11:40.406544  219843 logs.go:282] 0 containers: []
	W1129 09:11:40.406578  219843 logs.go:284] No container was found matching "coredns"
	I1129 09:11:40.406591  219843 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:11:40.406652  219843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:11:40.448896  219843 cri.go:89] found id: "1c4cc46e5af366e3fd498dd73ec8725595c451cd6cfe29b8e25688ac13578fc9"
	I1129 09:11:40.448922  219843 cri.go:89] found id: ""
	I1129 09:11:40.448932  219843 logs.go:282] 1 containers: [1c4cc46e5af366e3fd498dd73ec8725595c451cd6cfe29b8e25688ac13578fc9]
	I1129 09:11:40.448995  219843 ssh_runner.go:195] Run: which crictl
	I1129 09:11:40.454514  219843 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:11:40.454607  219843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:11:40.497404  219843 cri.go:89] found id: ""
	I1129 09:11:40.497432  219843 logs.go:282] 0 containers: []
	W1129 09:11:40.497445  219843 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:11:40.497453  219843 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:11:40.497515  219843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:11:40.543655  219843 cri.go:89] found id: "e88d98b3aacbd0d3318bd26614fb599cf82b0f04661b694435b4e58ea4629116"
	I1129 09:11:40.543678  219843 cri.go:89] found id: ""
	I1129 09:11:40.543687  219843 logs.go:282] 1 containers: [e88d98b3aacbd0d3318bd26614fb599cf82b0f04661b694435b4e58ea4629116]
	I1129 09:11:40.543749  219843 ssh_runner.go:195] Run: which crictl
	I1129 09:11:40.548165  219843 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 09:11:40.548242  219843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:11:40.590718  219843 cri.go:89] found id: ""
	I1129 09:11:40.590744  219843 logs.go:282] 0 containers: []
	W1129 09:11:40.590755  219843 logs.go:284] No container was found matching "kindnet"
	I1129 09:11:40.590763  219843 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:11:40.590824  219843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:11:40.633221  219843 cri.go:89] found id: ""
	I1129 09:11:40.633248  219843 logs.go:282] 0 containers: []
	W1129 09:11:40.633259  219843 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:11:40.633274  219843 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:11:40.633290  219843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:11:40.701716  219843 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:11:40.701736  219843 logs.go:123] Gathering logs for kube-apiserver [c401851db3b97d96a64ca91142bb25ffcb7394406acd89ab1c6ee91b28883f83] ...
	I1129 09:11:40.701749  219843 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c401851db3b97d96a64ca91142bb25ffcb7394406acd89ab1c6ee91b28883f83"
	I1129 09:11:40.745170  219843 logs.go:123] Gathering logs for kube-scheduler [1c4cc46e5af366e3fd498dd73ec8725595c451cd6cfe29b8e25688ac13578fc9] ...
	I1129 09:11:40.745202  219843 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c4cc46e5af366e3fd498dd73ec8725595c451cd6cfe29b8e25688ac13578fc9"
	I1129 09:11:40.819819  219843 logs.go:123] Gathering logs for kube-controller-manager [e88d98b3aacbd0d3318bd26614fb599cf82b0f04661b694435b4e58ea4629116] ...
	I1129 09:11:40.819865  219843 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e88d98b3aacbd0d3318bd26614fb599cf82b0f04661b694435b4e58ea4629116"
	I1129 09:11:40.865441  219843 logs.go:123] Gathering logs for CRI-O ...
	I1129 09:11:40.865480  219843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 09:11:40.919051  219843 logs.go:123] Gathering logs for container status ...
	I1129 09:11:40.919115  219843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:11:40.960875  219843 logs.go:123] Gathering logs for kubelet ...
	I1129 09:11:40.960907  219843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:11:41.058946  219843 logs.go:123] Gathering logs for dmesg ...
	I1129 09:11:41.058993  219843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
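
Each "Gathering logs for ..." step above is a plain shell pipeline executed on the node, including a fallback from crictl to docker for container status. A local sketch of the same pattern:

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one log-source pipeline and prints its output under a header,
// mirroring the per-source gathering steps logged above (run locally here;
// the real calls go through ssh_runner on the node).
func gather(name, cmd string) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("==> %s <==\n%s", name, out)
	if err != nil {
		fmt.Printf("(%s exited: %v)\n", name, err)
	}
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("CRI-O", "sudo journalctl -u crio -n 400")
	// Fall back to docker when crictl is missing, exactly as the log shows.
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}
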
	I1129 09:11:40.821826  237717 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:11:40.821828  237717 addons.go:530] duration metric: took 5.075699ms for enable addons: enabled=[]
	I1129 09:11:40.945713  237717 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:11:40.961579  237717 node_ready.go:35] waiting up to 6m0s for node "pause-295501" to be "Ready" ...
	I1129 09:11:40.969379  237717 node_ready.go:49] node "pause-295501" is "Ready"
	I1129 09:11:40.969413  237717 node_ready.go:38] duration metric: took 7.804382ms for node "pause-295501" to be "Ready" ...
	I1129 09:11:40.969428  237717 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:11:40.969484  237717 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:11:40.983937  237717 api_server.go:72] duration metric: took 167.245161ms to wait for apiserver process to appear ...
	I1129 09:11:40.983968  237717 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:11:40.983991  237717 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:11:40.989114  237717 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1129 09:11:40.990011  237717 api_server.go:141] control plane version: v1.34.1
	I1129 09:11:40.990039  237717 api_server.go:131] duration metric: took 6.06432ms to wait for apiserver health ...
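
The healthz wait above retries https://<node-ip>:8443/healthz until the apiserver answers 200 "ok"; the connection-refused errors seen elsewhere in this log are expected while a control plane comes back up. A minimal polling sketch — TLS verification is skipped here for brevity, whereas the real client trusts the profile's cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns 200 or
// the timeout elapses, as the api_server.go checks above do.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reported "ok"
			}
		}
		time.Sleep(500 * time.Millisecond) // connection refused while apiserver restarts
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitHealthz("https://192.168.85.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
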
	I1129 09:11:40.990049  237717 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:11:40.993045  237717 system_pods.go:59] 7 kube-system pods found
	I1129 09:11:40.993072  237717 system_pods.go:61] "coredns-66bc5c9577-zwrqh" [22030a4d-3d80-4ff5-b4ff-245caa1db156] Running
	I1129 09:11:40.993078  237717 system_pods.go:61] "etcd-pause-295501" [8a11dab1-cbef-4fd6-b149-0c9e2408f284] Running
	I1129 09:11:40.993082  237717 system_pods.go:61] "kindnet-st2fs" [a6934cc7-9fdf-4551-bfd6-b001f95eb4f2] Running
	I1129 09:11:40.993085  237717 system_pods.go:61] "kube-apiserver-pause-295501" [9c390dfe-6ae0-484b-99ad-306b1178b990] Running
	I1129 09:11:40.993088  237717 system_pods.go:61] "kube-controller-manager-pause-295501" [6155afce-9251-4e9f-acf5-8e9d6099f2a6] Running
	I1129 09:11:40.993091  237717 system_pods.go:61] "kube-proxy-f4kr8" [049b4663-ac1a-4dfc-9ab7-7060baa838e6] Running
	I1129 09:11:40.993094  237717 system_pods.go:61] "kube-scheduler-pause-295501" [c1e85599-cb4d-4518-be61-1d19147cd2e6] Running
	I1129 09:11:40.993100  237717 system_pods.go:74] duration metric: took 3.04543ms to wait for pod list to return data ...
	I1129 09:11:40.993107  237717 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:11:40.995365  237717 default_sa.go:45] found service account: "default"
	I1129 09:11:40.995398  237717 default_sa.go:55] duration metric: took 2.284204ms for default service account to be created ...
	I1129 09:11:40.995408  237717 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 09:11:40.998437  237717 system_pods.go:86] 7 kube-system pods found
	I1129 09:11:40.998469  237717 system_pods.go:89] "coredns-66bc5c9577-zwrqh" [22030a4d-3d80-4ff5-b4ff-245caa1db156] Running
	I1129 09:11:40.998479  237717 system_pods.go:89] "etcd-pause-295501" [8a11dab1-cbef-4fd6-b149-0c9e2408f284] Running
	I1129 09:11:40.998485  237717 system_pods.go:89] "kindnet-st2fs" [a6934cc7-9fdf-4551-bfd6-b001f95eb4f2] Running
	I1129 09:11:40.998490  237717 system_pods.go:89] "kube-apiserver-pause-295501" [9c390dfe-6ae0-484b-99ad-306b1178b990] Running
	I1129 09:11:40.998495  237717 system_pods.go:89] "kube-controller-manager-pause-295501" [6155afce-9251-4e9f-acf5-8e9d6099f2a6] Running
	I1129 09:11:40.998500  237717 system_pods.go:89] "kube-proxy-f4kr8" [049b4663-ac1a-4dfc-9ab7-7060baa838e6] Running
	I1129 09:11:40.998505  237717 system_pods.go:89] "kube-scheduler-pause-295501" [c1e85599-cb4d-4518-be61-1d19147cd2e6] Running
	I1129 09:11:40.998520  237717 system_pods.go:126] duration metric: took 3.105692ms to wait for k8s-apps to be running ...
	I1129 09:11:40.998532  237717 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 09:11:40.998586  237717 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:11:41.012566  237717 system_svc.go:56] duration metric: took 14.026429ms WaitForService to wait for kubelet
	I1129 09:11:41.012593  237717 kubeadm.go:587] duration metric: took 195.907676ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:11:41.012609  237717 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:11:41.015321  237717 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1129 09:11:41.015368  237717 node_conditions.go:123] node cpu capacity is 8
	I1129 09:11:41.015390  237717 node_conditions.go:105] duration metric: took 2.773198ms to run NodePressure ...
	I1129 09:11:41.015404  237717 start.go:242] waiting for startup goroutines ...
	I1129 09:11:41.015413  237717 start.go:247] waiting for cluster config update ...
	I1129 09:11:41.015423  237717 start.go:256] writing updated cluster config ...
	I1129 09:11:41.015771  237717 ssh_runner.go:195] Run: rm -f paused
	I1129 09:11:41.020301  237717 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:11:41.021304  237717 kapi.go:59] client config for pause-295501: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22000-5652/.minikube/profiles/pause-295501/client.crt", KeyFile:"/home/jenkins/minikube-integration/22000-5652/.minikube/profiles/pause-295501/client.key", CAFile:"/home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1129 09:11:41.024522  237717 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zwrqh" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:11:41.029342  237717 pod_ready.go:94] pod "coredns-66bc5c9577-zwrqh" is "Ready"
	I1129 09:11:41.029380  237717 pod_ready.go:86] duration metric: took 4.832112ms for pod "coredns-66bc5c9577-zwrqh" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:11:41.031624  237717 pod_ready.go:83] waiting for pod "etcd-pause-295501" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:11:41.036104  237717 pod_ready.go:94] pod "etcd-pause-295501" is "Ready"
	I1129 09:11:41.036132  237717 pod_ready.go:86] duration metric: took 4.479394ms for pod "etcd-pause-295501" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:11:41.038343  237717 pod_ready.go:83] waiting for pod "kube-apiserver-pause-295501" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:11:41.042711  237717 pod_ready.go:94] pod "kube-apiserver-pause-295501" is "Ready"
	I1129 09:11:41.042736  237717 pod_ready.go:86] duration metric: took 4.370525ms for pod "kube-apiserver-pause-295501" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:11:41.044936  237717 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-295501" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:11:41.424440  237717 pod_ready.go:94] pod "kube-controller-manager-pause-295501" is "Ready"
	I1129 09:11:41.424472  237717 pod_ready.go:86] duration metric: took 379.512342ms for pod "kube-controller-manager-pause-295501" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:11:37.038118  218317 cri.go:89] found id: ""
	I1129 09:11:37.038146  218317 logs.go:282] 0 containers: []
	W1129 09:11:37.038157  218317 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:11:37.038170  218317 logs.go:123] Gathering logs for kubelet ...
	I1129 09:11:37.038186  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:11:37.132553  218317 logs.go:123] Gathering logs for dmesg ...
	I1129 09:11:37.132587  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:11:37.148650  218317 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:11:37.148677  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:11:37.213617  218317 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:11:37.213641  218317 logs.go:123] Gathering logs for kube-apiserver [ad97145496ded2f71cd6c4470059d5cc8eba68fc6ea8dd528688b83ade8c4db8] ...
	I1129 09:11:37.213657  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ad97145496ded2f71cd6c4470059d5cc8eba68fc6ea8dd528688b83ade8c4db8"
	I1129 09:11:37.252650  218317 logs.go:123] Gathering logs for kube-scheduler [d36b98de5d776212c9777ea4840006b8e9bc445d70128594889132af99755d99] ...
	I1129 09:11:37.252682  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d36b98de5d776212c9777ea4840006b8e9bc445d70128594889132af99755d99"
	I1129 09:11:37.310092  218317 logs.go:123] Gathering logs for kube-controller-manager [ead086a0104bf53c59ed5d8e223ccbb7be900745b5ce874ae12a9c40c089d336] ...
	I1129 09:11:37.310139  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ead086a0104bf53c59ed5d8e223ccbb7be900745b5ce874ae12a9c40c089d336"
	I1129 09:11:37.341690  218317 logs.go:123] Gathering logs for CRI-O ...
	I1129 09:11:37.341721  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 09:11:37.407424  218317 logs.go:123] Gathering logs for container status ...
	I1129 09:11:37.407477  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:11:39.943945  218317 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1129 09:11:39.944425  218317 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1129 09:11:39.944472  218317 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:11:39.944523  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:11:39.978366  218317 cri.go:89] found id: "ad97145496ded2f71cd6c4470059d5cc8eba68fc6ea8dd528688b83ade8c4db8"
	I1129 09:11:39.978395  218317 cri.go:89] found id: ""
	I1129 09:11:39.978406  218317 logs.go:282] 1 containers: [ad97145496ded2f71cd6c4470059d5cc8eba68fc6ea8dd528688b83ade8c4db8]
	I1129 09:11:39.978469  218317 ssh_runner.go:195] Run: which crictl
	I1129 09:11:39.982571  218317 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 09:11:39.982646  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:11:40.014565  218317 cri.go:89] found id: ""
	I1129 09:11:40.014595  218317 logs.go:282] 0 containers: []
	W1129 09:11:40.014605  218317 logs.go:284] No container was found matching "etcd"
	I1129 09:11:40.014613  218317 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 09:11:40.014679  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:11:40.045211  218317 cri.go:89] found id: ""
	I1129 09:11:40.045241  218317 logs.go:282] 0 containers: []
	W1129 09:11:40.045253  218317 logs.go:284] No container was found matching "coredns"
	I1129 09:11:40.045261  218317 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:11:40.045325  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:11:40.077112  218317 cri.go:89] found id: "d36b98de5d776212c9777ea4840006b8e9bc445d70128594889132af99755d99"
	I1129 09:11:40.077140  218317 cri.go:89] found id: ""
	I1129 09:11:40.077151  218317 logs.go:282] 1 containers: [d36b98de5d776212c9777ea4840006b8e9bc445d70128594889132af99755d99]
	I1129 09:11:40.077216  218317 ssh_runner.go:195] Run: which crictl
	I1129 09:11:40.081894  218317 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:11:40.081969  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:11:40.113487  218317 cri.go:89] found id: ""
	I1129 09:11:40.113511  218317 logs.go:282] 0 containers: []
	W1129 09:11:40.113521  218317 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:11:40.113529  218317 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:11:40.113588  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:11:40.143488  218317 cri.go:89] found id: "ead086a0104bf53c59ed5d8e223ccbb7be900745b5ce874ae12a9c40c089d336"
	I1129 09:11:40.143513  218317 cri.go:89] found id: ""
	I1129 09:11:40.143522  218317 logs.go:282] 1 containers: [ead086a0104bf53c59ed5d8e223ccbb7be900745b5ce874ae12a9c40c089d336]
	I1129 09:11:40.143573  218317 ssh_runner.go:195] Run: which crictl
	I1129 09:11:40.148170  218317 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 09:11:40.148249  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:11:40.179901  218317 cri.go:89] found id: ""
	I1129 09:11:40.179928  218317 logs.go:282] 0 containers: []
	W1129 09:11:40.179938  218317 logs.go:284] No container was found matching "kindnet"
	I1129 09:11:40.179946  218317 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:11:40.180004  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:11:40.210247  218317 cri.go:89] found id: ""
	I1129 09:11:40.210270  218317 logs.go:282] 0 containers: []
	W1129 09:11:40.210277  218317 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:11:40.210286  218317 logs.go:123] Gathering logs for dmesg ...
	I1129 09:11:40.210298  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:11:40.226470  218317 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:11:40.226505  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:11:40.292217  218317 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:11:40.292243  218317 logs.go:123] Gathering logs for kube-apiserver [ad97145496ded2f71cd6c4470059d5cc8eba68fc6ea8dd528688b83ade8c4db8] ...
	I1129 09:11:40.292258  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ad97145496ded2f71cd6c4470059d5cc8eba68fc6ea8dd528688b83ade8c4db8"
	I1129 09:11:40.332431  218317 logs.go:123] Gathering logs for kube-scheduler [d36b98de5d776212c9777ea4840006b8e9bc445d70128594889132af99755d99] ...
	I1129 09:11:40.332464  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d36b98de5d776212c9777ea4840006b8e9bc445d70128594889132af99755d99"
	I1129 09:11:40.400115  218317 logs.go:123] Gathering logs for kube-controller-manager [ead086a0104bf53c59ed5d8e223ccbb7be900745b5ce874ae12a9c40c089d336] ...
	I1129 09:11:40.400153  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ead086a0104bf53c59ed5d8e223ccbb7be900745b5ce874ae12a9c40c089d336"
	I1129 09:11:40.434697  218317 logs.go:123] Gathering logs for CRI-O ...
	I1129 09:11:40.434724  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 09:11:40.502709  218317 logs.go:123] Gathering logs for container status ...
	I1129 09:11:40.502750  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:11:40.544897  218317 logs.go:123] Gathering logs for kubelet ...
	I1129 09:11:40.544929  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:11:41.624451  237717 pod_ready.go:83] waiting for pod "kube-proxy-f4kr8" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:11:42.024439  237717 pod_ready.go:94] pod "kube-proxy-f4kr8" is "Ready"
	I1129 09:11:42.024466  237717 pod_ready.go:86] duration metric: took 399.988573ms for pod "kube-proxy-f4kr8" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:11:42.224669  237717 pod_ready.go:83] waiting for pod "kube-scheduler-pause-295501" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:11:42.625036  237717 pod_ready.go:94] pod "kube-scheduler-pause-295501" is "Ready"
	I1129 09:11:42.625065  237717 pod_ready.go:86] duration metric: took 400.370045ms for pod "kube-scheduler-pause-295501" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:11:42.625078  237717 pod_ready.go:40] duration metric: took 1.604733434s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:11:42.670228  237717 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1129 09:11:42.671921  237717 out.go:179] * Done! kubectl is now configured to use "pause-295501" cluster and "default" namespace by default
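
The pod_ready.go waits above amount to reading each pod's PodReady condition from the API server. A hedged client-go sketch of the same check; the kubeconfig path and pod name here are taken from this run and would differ elsewhere:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True, which is
// what the "pod ... is Ready" lines above assert.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22000-5652/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-66bc5c9577-zwrqh", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", isPodReady(pod))
}
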
	I1129 09:11:39.455759  214471 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1129 09:11:39.456221  214471 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1129 09:11:39.456269  214471 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:11:39.456319  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:11:39.493639  214471 cri.go:89] found id: "1dff8abbda684fe8a803a0d7feee0d46229689119dc2199e9c5b4d859caf176d"
	I1129 09:11:39.493666  214471 cri.go:89] found id: ""
	I1129 09:11:39.493677  214471 logs.go:282] 1 containers: [1dff8abbda684fe8a803a0d7feee0d46229689119dc2199e9c5b4d859caf176d]
	I1129 09:11:39.493742  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:39.497836  214471 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 09:11:39.497921  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:11:39.535560  214471 cri.go:89] found id: "6cdd44d8db846ffb376bf831123c4ce432f87f0616753fa1802db2474c6da5e0"
	I1129 09:11:39.535582  214471 cri.go:89] found id: ""
	I1129 09:11:39.535603  214471 logs.go:282] 1 containers: [6cdd44d8db846ffb376bf831123c4ce432f87f0616753fa1802db2474c6da5e0]
	I1129 09:11:39.535662  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:39.539813  214471 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 09:11:39.539911  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:11:39.580314  214471 cri.go:89] found id: ""
	I1129 09:11:39.580337  214471 logs.go:282] 0 containers: []
	W1129 09:11:39.580355  214471 logs.go:284] No container was found matching "coredns"
	I1129 09:11:39.580360  214471 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:11:39.580480  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:11:39.620199  214471 cri.go:89] found id: "fcff0da386e90a7e2da6915329552d3b7f08edc5e745bc390cd94526235e34ad"
	I1129 09:11:39.620223  214471 cri.go:89] found id: "fc7be3672b1d7cde9d3cd515c0ebafb17a2ff060a0fd47dbbb990b57cde6c4a7"
	I1129 09:11:39.620230  214471 cri.go:89] found id: ""
	I1129 09:11:39.620240  214471 logs.go:282] 2 containers: [fcff0da386e90a7e2da6915329552d3b7f08edc5e745bc390cd94526235e34ad fc7be3672b1d7cde9d3cd515c0ebafb17a2ff060a0fd47dbbb990b57cde6c4a7]
	I1129 09:11:39.620292  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:39.624913  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:39.628828  214471 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:11:39.628912  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:11:39.666984  214471 cri.go:89] found id: "31d6025ff82b3e085a9904c2d696e3cd87b765553852bee59bf02c5ca2982bb3"
	I1129 09:11:39.667009  214471 cri.go:89] found id: "9bc5d76fdb0a28cd329b1051fec59088325c606b89a8b925d3ab5b1519649cc3"
	I1129 09:11:39.667015  214471 cri.go:89] found id: ""
	I1129 09:11:39.667028  214471 logs.go:282] 2 containers: [31d6025ff82b3e085a9904c2d696e3cd87b765553852bee59bf02c5ca2982bb3 9bc5d76fdb0a28cd329b1051fec59088325c606b89a8b925d3ab5b1519649cc3]
	I1129 09:11:39.667087  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:39.671862  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:39.676989  214471 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:11:39.677069  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:11:39.717467  214471 cri.go:89] found id: "33f91ffe40a0d328d35ddf61661372bb263a7bc34d24185bd700d35146f1a322"
	I1129 09:11:39.717486  214471 cri.go:89] found id: ""
	I1129 09:11:39.717499  214471 logs.go:282] 1 containers: [33f91ffe40a0d328d35ddf61661372bb263a7bc34d24185bd700d35146f1a322]
	I1129 09:11:39.717549  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:39.721428  214471 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 09:11:39.721505  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:11:39.762828  214471 cri.go:89] found id: "fdaf7e1d1f92f5357e2a36a658d6c56cb5e970830f15eb88888d800f36574e19"
	I1129 09:11:39.762873  214471 cri.go:89] found id: "ec138943901d9c0c58c204789d15a0d1d6edad3161a00469e53b5fd0908fe194"
	I1129 09:11:39.762879  214471 cri.go:89] found id: ""
	I1129 09:11:39.762889  214471 logs.go:282] 2 containers: [fdaf7e1d1f92f5357e2a36a658d6c56cb5e970830f15eb88888d800f36574e19 ec138943901d9c0c58c204789d15a0d1d6edad3161a00469e53b5fd0908fe194]
	I1129 09:11:39.762951  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:39.767631  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:39.771612  214471 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:11:39.771670  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:11:39.811684  214471 cri.go:89] found id: "d29d7545c01a5541cd53753145eb6b8f887d1ca4f6e2c856d793a2af6f0f120b"
	I1129 09:11:39.811711  214471 cri.go:89] found id: ""
	I1129 09:11:39.811721  214471 logs.go:282] 1 containers: [d29d7545c01a5541cd53753145eb6b8f887d1ca4f6e2c856d793a2af6f0f120b]
	I1129 09:11:39.811783  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:39.815823  214471 logs.go:123] Gathering logs for container status ...
	I1129 09:11:39.815865  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:11:39.857558  214471 logs.go:123] Gathering logs for dmesg ...
	I1129 09:11:39.857588  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:11:39.882667  214471 logs.go:123] Gathering logs for kube-apiserver [1dff8abbda684fe8a803a0d7feee0d46229689119dc2199e9c5b4d859caf176d] ...
	I1129 09:11:39.882716  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1dff8abbda684fe8a803a0d7feee0d46229689119dc2199e9c5b4d859caf176d"
	I1129 09:11:39.923530  214471 logs.go:123] Gathering logs for kube-proxy [9bc5d76fdb0a28cd329b1051fec59088325c606b89a8b925d3ab5b1519649cc3] ...
	I1129 09:11:39.923567  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bc5d76fdb0a28cd329b1051fec59088325c606b89a8b925d3ab5b1519649cc3"
	I1129 09:11:39.964583  214471 logs.go:123] Gathering logs for kindnet [fdaf7e1d1f92f5357e2a36a658d6c56cb5e970830f15eb88888d800f36574e19] ...
	I1129 09:11:39.964621  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fdaf7e1d1f92f5357e2a36a658d6c56cb5e970830f15eb88888d800f36574e19"
	I1129 09:11:40.020586  214471 logs.go:123] Gathering logs for storage-provisioner [d29d7545c01a5541cd53753145eb6b8f887d1ca4f6e2c856d793a2af6f0f120b] ...
	I1129 09:11:40.020623  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d29d7545c01a5541cd53753145eb6b8f887d1ca4f6e2c856d793a2af6f0f120b"
	I1129 09:11:40.064811  214471 logs.go:123] Gathering logs for kubelet ...
	I1129 09:11:40.064859  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:11:40.167022  214471 logs.go:123] Gathering logs for kube-scheduler [fc7be3672b1d7cde9d3cd515c0ebafb17a2ff060a0fd47dbbb990b57cde6c4a7] ...
	I1129 09:11:40.167058  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc7be3672b1d7cde9d3cd515c0ebafb17a2ff060a0fd47dbbb990b57cde6c4a7"
	I1129 09:11:40.221232  214471 logs.go:123] Gathering logs for kube-proxy [31d6025ff82b3e085a9904c2d696e3cd87b765553852bee59bf02c5ca2982bb3] ...
	I1129 09:11:40.221274  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31d6025ff82b3e085a9904c2d696e3cd87b765553852bee59bf02c5ca2982bb3"
	I1129 09:11:40.271434  214471 logs.go:123] Gathering logs for CRI-O ...
	I1129 09:11:40.271464  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 09:11:40.350824  214471 logs.go:123] Gathering logs for kube-scheduler [fcff0da386e90a7e2da6915329552d3b7f08edc5e745bc390cd94526235e34ad] ...
	I1129 09:11:40.350875  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fcff0da386e90a7e2da6915329552d3b7f08edc5e745bc390cd94526235e34ad"
	I1129 09:11:40.438732  214471 logs.go:123] Gathering logs for kube-controller-manager [33f91ffe40a0d328d35ddf61661372bb263a7bc34d24185bd700d35146f1a322] ...
	I1129 09:11:40.438765  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33f91ffe40a0d328d35ddf61661372bb263a7bc34d24185bd700d35146f1a322"
	I1129 09:11:40.486213  214471 logs.go:123] Gathering logs for kindnet [ec138943901d9c0c58c204789d15a0d1d6edad3161a00469e53b5fd0908fe194] ...
	I1129 09:11:40.486244  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec138943901d9c0c58c204789d15a0d1d6edad3161a00469e53b5fd0908fe194"
	I1129 09:11:40.532247  214471 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:11:40.532274  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:11:40.608168  214471 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:11:40.608193  214471 logs.go:123] Gathering logs for etcd [6cdd44d8db846ffb376bf831123c4ce432f87f0616753fa1802db2474c6da5e0] ...
	I1129 09:11:40.608212  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6cdd44d8db846ffb376bf831123c4ce432f87f0616753fa1802db2474c6da5e0"
	I1129 09:11:43.168886  214471 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1129 09:11:43.169492  214471 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1129 09:11:43.169561  214471 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:11:43.169627  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:11:43.213215  214471 cri.go:89] found id: "1dff8abbda684fe8a803a0d7feee0d46229689119dc2199e9c5b4d859caf176d"
	I1129 09:11:43.213242  214471 cri.go:89] found id: ""
	I1129 09:11:43.213252  214471 logs.go:282] 1 containers: [1dff8abbda684fe8a803a0d7feee0d46229689119dc2199e9c5b4d859caf176d]
	I1129 09:11:43.213316  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:43.218373  214471 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 09:11:43.218457  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:11:43.257223  214471 cri.go:89] found id: "6cdd44d8db846ffb376bf831123c4ce432f87f0616753fa1802db2474c6da5e0"
	I1129 09:11:43.257245  214471 cri.go:89] found id: ""
	I1129 09:11:43.257253  214471 logs.go:282] 1 containers: [6cdd44d8db846ffb376bf831123c4ce432f87f0616753fa1802db2474c6da5e0]
	I1129 09:11:43.257308  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:43.261910  214471 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 09:11:43.261990  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:11:43.302140  214471 cri.go:89] found id: ""
	I1129 09:11:43.302172  214471 logs.go:282] 0 containers: []
	W1129 09:11:43.302194  214471 logs.go:284] No container was found matching "coredns"
	I1129 09:11:43.302203  214471 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:11:43.302267  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:11:43.339478  214471 cri.go:89] found id: "fcff0da386e90a7e2da6915329552d3b7f08edc5e745bc390cd94526235e34ad"
	I1129 09:11:43.339497  214471 cri.go:89] found id: "fc7be3672b1d7cde9d3cd515c0ebafb17a2ff060a0fd47dbbb990b57cde6c4a7"
	I1129 09:11:43.339501  214471 cri.go:89] found id: ""
	I1129 09:11:43.339508  214471 logs.go:282] 2 containers: [fcff0da386e90a7e2da6915329552d3b7f08edc5e745bc390cd94526235e34ad fc7be3672b1d7cde9d3cd515c0ebafb17a2ff060a0fd47dbbb990b57cde6c4a7]
	I1129 09:11:43.339549  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:43.343868  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:43.347769  214471 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:11:43.347851  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:11:43.385385  214471 cri.go:89] found id: "31d6025ff82b3e085a9904c2d696e3cd87b765553852bee59bf02c5ca2982bb3"
	I1129 09:11:43.385412  214471 cri.go:89] found id: "9bc5d76fdb0a28cd329b1051fec59088325c606b89a8b925d3ab5b1519649cc3"
	I1129 09:11:43.385418  214471 cri.go:89] found id: ""
	I1129 09:11:43.385428  214471 logs.go:282] 2 containers: [31d6025ff82b3e085a9904c2d696e3cd87b765553852bee59bf02c5ca2982bb3 9bc5d76fdb0a28cd329b1051fec59088325c606b89a8b925d3ab5b1519649cc3]
	I1129 09:11:43.385480  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:43.389543  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:43.394281  214471 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:11:43.394352  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:11:43.432680  214471 cri.go:89] found id: "33f91ffe40a0d328d35ddf61661372bb263a7bc34d24185bd700d35146f1a322"
	I1129 09:11:43.432707  214471 cri.go:89] found id: ""
	I1129 09:11:43.432717  214471 logs.go:282] 1 containers: [33f91ffe40a0d328d35ddf61661372bb263a7bc34d24185bd700d35146f1a322]
	I1129 09:11:43.432778  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:43.437328  214471 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 09:11:43.437406  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:11:43.477920  214471 cri.go:89] found id: "fdaf7e1d1f92f5357e2a36a658d6c56cb5e970830f15eb88888d800f36574e19"
	I1129 09:11:43.477949  214471 cri.go:89] found id: "ec138943901d9c0c58c204789d15a0d1d6edad3161a00469e53b5fd0908fe194"
	I1129 09:11:43.477955  214471 cri.go:89] found id: ""
	I1129 09:11:43.477964  214471 logs.go:282] 2 containers: [fdaf7e1d1f92f5357e2a36a658d6c56cb5e970830f15eb88888d800f36574e19 ec138943901d9c0c58c204789d15a0d1d6edad3161a00469e53b5fd0908fe194]
	I1129 09:11:43.478019  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:43.482024  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:43.486067  214471 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:11:43.486140  214471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:11:43.534548  214471 cri.go:89] found id: "d29d7545c01a5541cd53753145eb6b8f887d1ca4f6e2c856d793a2af6f0f120b"
	I1129 09:11:43.534576  214471 cri.go:89] found id: ""
	I1129 09:11:43.534587  214471 logs.go:282] 1 containers: [d29d7545c01a5541cd53753145eb6b8f887d1ca4f6e2c856d793a2af6f0f120b]
	I1129 09:11:43.534644  214471 ssh_runner.go:195] Run: which crictl
	I1129 09:11:43.539198  214471 logs.go:123] Gathering logs for storage-provisioner [d29d7545c01a5541cd53753145eb6b8f887d1ca4f6e2c856d793a2af6f0f120b] ...
	I1129 09:11:43.539223  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d29d7545c01a5541cd53753145eb6b8f887d1ca4f6e2c856d793a2af6f0f120b"
	I1129 09:11:43.578682  214471 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:11:43.578711  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:11:43.654353  214471 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:11:43.654388  214471 logs.go:123] Gathering logs for kindnet [ec138943901d9c0c58c204789d15a0d1d6edad3161a00469e53b5fd0908fe194] ...
	I1129 09:11:43.654403  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec138943901d9c0c58c204789d15a0d1d6edad3161a00469e53b5fd0908fe194"
	I1129 09:11:43.697226  214471 logs.go:123] Gathering logs for kubelet ...
	I1129 09:11:43.697315  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:11:43.807952  214471 logs.go:123] Gathering logs for dmesg ...
	I1129 09:11:43.808004  214471 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:11:43.576930  219843 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 09:11:43.577372  219843 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1129 09:11:43.577431  219843 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:11:43.577487  219843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:11:43.618478  219843 cri.go:89] found id: "c401851db3b97d96a64ca91142bb25ffcb7394406acd89ab1c6ee91b28883f83"
	I1129 09:11:43.618499  219843 cri.go:89] found id: ""
	I1129 09:11:43.618508  219843 logs.go:282] 1 containers: [c401851db3b97d96a64ca91142bb25ffcb7394406acd89ab1c6ee91b28883f83]
	I1129 09:11:43.618566  219843 ssh_runner.go:195] Run: which crictl
	I1129 09:11:43.623015  219843 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 09:11:43.623078  219843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:11:43.671123  219843 cri.go:89] found id: ""
	I1129 09:11:43.671144  219843 logs.go:282] 0 containers: []
	W1129 09:11:43.671152  219843 logs.go:284] No container was found matching "etcd"
	I1129 09:11:43.671157  219843 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 09:11:43.671203  219843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:11:43.719256  219843 cri.go:89] found id: ""
	I1129 09:11:43.719284  219843 logs.go:282] 0 containers: []
	W1129 09:11:43.719295  219843 logs.go:284] No container was found matching "coredns"
	I1129 09:11:43.719304  219843 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:11:43.719368  219843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:11:43.760633  219843 cri.go:89] found id: "1c4cc46e5af366e3fd498dd73ec8725595c451cd6cfe29b8e25688ac13578fc9"
	I1129 09:11:43.760663  219843 cri.go:89] found id: ""
	I1129 09:11:43.760675  219843 logs.go:282] 1 containers: [1c4cc46e5af366e3fd498dd73ec8725595c451cd6cfe29b8e25688ac13578fc9]
	I1129 09:11:43.760741  219843 ssh_runner.go:195] Run: which crictl
	I1129 09:11:43.765110  219843 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:11:43.765183  219843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:11:43.808223  219843 cri.go:89] found id: ""
	I1129 09:11:43.808250  219843 logs.go:282] 0 containers: []
	W1129 09:11:43.808262  219843 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:11:43.808273  219843 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:11:43.808340  219843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:11:43.847195  219843 cri.go:89] found id: "e88d98b3aacbd0d3318bd26614fb599cf82b0f04661b694435b4e58ea4629116"
	I1129 09:11:43.847224  219843 cri.go:89] found id: ""
	I1129 09:11:43.847237  219843 logs.go:282] 1 containers: [e88d98b3aacbd0d3318bd26614fb599cf82b0f04661b694435b4e58ea4629116]
	I1129 09:11:43.847303  219843 ssh_runner.go:195] Run: which crictl
	I1129 09:11:43.851298  219843 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 09:11:43.851370  219843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:11:43.889428  219843 cri.go:89] found id: ""
	I1129 09:11:43.889456  219843 logs.go:282] 0 containers: []
	W1129 09:11:43.889466  219843 logs.go:284] No container was found matching "kindnet"
	I1129 09:11:43.889474  219843 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:11:43.889531  219843 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:11:43.931145  219843 cri.go:89] found id: ""
	I1129 09:11:43.931171  219843 logs.go:282] 0 containers: []
	W1129 09:11:43.931180  219843 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:11:43.931191  219843 logs.go:123] Gathering logs for kube-apiserver [c401851db3b97d96a64ca91142bb25ffcb7394406acd89ab1c6ee91b28883f83] ...
	I1129 09:11:43.931209  219843 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c401851db3b97d96a64ca91142bb25ffcb7394406acd89ab1c6ee91b28883f83"
	I1129 09:11:43.971702  219843 logs.go:123] Gathering logs for kube-scheduler [1c4cc46e5af366e3fd498dd73ec8725595c451cd6cfe29b8e25688ac13578fc9] ...
	I1129 09:11:43.971733  219843 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c4cc46e5af366e3fd498dd73ec8725595c451cd6cfe29b8e25688ac13578fc9"
	I1129 09:11:44.056778  219843 logs.go:123] Gathering logs for kube-controller-manager [e88d98b3aacbd0d3318bd26614fb599cf82b0f04661b694435b4e58ea4629116] ...
	I1129 09:11:44.056820  219843 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e88d98b3aacbd0d3318bd26614fb599cf82b0f04661b694435b4e58ea4629116"
	I1129 09:11:44.096805  219843 logs.go:123] Gathering logs for CRI-O ...
	I1129 09:11:44.096831  219843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 09:11:44.151710  219843 logs.go:123] Gathering logs for container status ...
	I1129 09:11:44.151748  219843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:11:44.194898  219843 logs.go:123] Gathering logs for kubelet ...
	I1129 09:11:44.194937  219843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:11:44.306431  219843 logs.go:123] Gathering logs for dmesg ...
	I1129 09:11:44.306471  219843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:11:44.322998  219843 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:11:44.323034  219843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:11:44.390400  219843 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:11:43.149465  218317 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1129 09:11:43.149925  218317 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1129 09:11:43.149979  218317 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:11:43.150039  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:11:43.182457  218317 cri.go:89] found id: "ad97145496ded2f71cd6c4470059d5cc8eba68fc6ea8dd528688b83ade8c4db8"
	I1129 09:11:43.182475  218317 cri.go:89] found id: ""
	I1129 09:11:43.182484  218317 logs.go:282] 1 containers: [ad97145496ded2f71cd6c4470059d5cc8eba68fc6ea8dd528688b83ade8c4db8]
	I1129 09:11:43.182543  218317 ssh_runner.go:195] Run: which crictl
	I1129 09:11:43.186885  218317 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 09:11:43.186955  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:11:43.219932  218317 cri.go:89] found id: ""
	I1129 09:11:43.219956  218317 logs.go:282] 0 containers: []
	W1129 09:11:43.219966  218317 logs.go:284] No container was found matching "etcd"
	I1129 09:11:43.219973  218317 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 09:11:43.220027  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:11:43.249244  218317 cri.go:89] found id: ""
	I1129 09:11:43.249274  218317 logs.go:282] 0 containers: []
	W1129 09:11:43.249406  218317 logs.go:284] No container was found matching "coredns"
	I1129 09:11:43.249424  218317 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:11:43.249498  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:11:43.279344  218317 cri.go:89] found id: "d36b98de5d776212c9777ea4840006b8e9bc445d70128594889132af99755d99"
	I1129 09:11:43.279372  218317 cri.go:89] found id: ""
	I1129 09:11:43.279383  218317 logs.go:282] 1 containers: [d36b98de5d776212c9777ea4840006b8e9bc445d70128594889132af99755d99]
	I1129 09:11:43.279446  218317 ssh_runner.go:195] Run: which crictl
	I1129 09:11:43.283673  218317 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:11:43.283753  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:11:43.313988  218317 cri.go:89] found id: ""
	I1129 09:11:43.314016  218317 logs.go:282] 0 containers: []
	W1129 09:11:43.314027  218317 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:11:43.314034  218317 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:11:43.314096  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:11:43.345045  218317 cri.go:89] found id: "ead086a0104bf53c59ed5d8e223ccbb7be900745b5ce874ae12a9c40c089d336"
	I1129 09:11:43.345068  218317 cri.go:89] found id: ""
	I1129 09:11:43.345077  218317 logs.go:282] 1 containers: [ead086a0104bf53c59ed5d8e223ccbb7be900745b5ce874ae12a9c40c089d336]
	I1129 09:11:43.345137  218317 ssh_runner.go:195] Run: which crictl
	I1129 09:11:43.349154  218317 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 09:11:43.349220  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:11:43.380601  218317 cri.go:89] found id: ""
	I1129 09:11:43.380631  218317 logs.go:282] 0 containers: []
	W1129 09:11:43.380643  218317 logs.go:284] No container was found matching "kindnet"
	I1129 09:11:43.380651  218317 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:11:43.380718  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:11:43.409783  218317 cri.go:89] found id: ""
	I1129 09:11:43.409812  218317 logs.go:282] 0 containers: []
	W1129 09:11:43.409824  218317 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:11:43.409835  218317 logs.go:123] Gathering logs for kube-controller-manager [ead086a0104bf53c59ed5d8e223ccbb7be900745b5ce874ae12a9c40c089d336] ...
	I1129 09:11:43.409873  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ead086a0104bf53c59ed5d8e223ccbb7be900745b5ce874ae12a9c40c089d336"
	I1129 09:11:43.440300  218317 logs.go:123] Gathering logs for CRI-O ...
	I1129 09:11:43.440325  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 09:11:43.499785  218317 logs.go:123] Gathering logs for container status ...
	I1129 09:11:43.499832  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:11:43.541652  218317 logs.go:123] Gathering logs for kubelet ...
	I1129 09:11:43.541677  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:11:43.648387  218317 logs.go:123] Gathering logs for dmesg ...
	I1129 09:11:43.648430  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:11:43.668168  218317 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:11:43.668206  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:11:43.741752  218317 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:11:43.741774  218317 logs.go:123] Gathering logs for kube-apiserver [ad97145496ded2f71cd6c4470059d5cc8eba68fc6ea8dd528688b83ade8c4db8] ...
	I1129 09:11:43.741788  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ad97145496ded2f71cd6c4470059d5cc8eba68fc6ea8dd528688b83ade8c4db8"
	I1129 09:11:43.781218  218317 logs.go:123] Gathering logs for kube-scheduler [d36b98de5d776212c9777ea4840006b8e9bc445d70128594889132af99755d99] ...
	I1129 09:11:43.781252  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d36b98de5d776212c9777ea4840006b8e9bc445d70128594889132af99755d99"
	I1129 09:11:46.335231  218317 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1129 09:11:46.335635  218317 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1129 09:11:46.335685  218317 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:11:46.335737  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:11:46.364228  218317 cri.go:89] found id: "ad97145496ded2f71cd6c4470059d5cc8eba68fc6ea8dd528688b83ade8c4db8"
	I1129 09:11:46.364249  218317 cri.go:89] found id: ""
	I1129 09:11:46.364256  218317 logs.go:282] 1 containers: [ad97145496ded2f71cd6c4470059d5cc8eba68fc6ea8dd528688b83ade8c4db8]
	I1129 09:11:46.364306  218317 ssh_runner.go:195] Run: which crictl
	I1129 09:11:46.368318  218317 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 09:11:46.368385  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:11:46.395528  218317 cri.go:89] found id: ""
	I1129 09:11:46.395555  218317 logs.go:282] 0 containers: []
	W1129 09:11:46.395566  218317 logs.go:284] No container was found matching "etcd"
	I1129 09:11:46.395574  218317 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 09:11:46.395630  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:11:46.422300  218317 cri.go:89] found id: ""
	I1129 09:11:46.422330  218317 logs.go:282] 0 containers: []
	W1129 09:11:46.422341  218317 logs.go:284] No container was found matching "coredns"
	I1129 09:11:46.422349  218317 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:11:46.422423  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:11:46.458296  218317 cri.go:89] found id: "d36b98de5d776212c9777ea4840006b8e9bc445d70128594889132af99755d99"
	I1129 09:11:46.458320  218317 cri.go:89] found id: ""
	I1129 09:11:46.458330  218317 logs.go:282] 1 containers: [d36b98de5d776212c9777ea4840006b8e9bc445d70128594889132af99755d99]
	I1129 09:11:46.458390  218317 ssh_runner.go:195] Run: which crictl
	I1129 09:11:46.463091  218317 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:11:46.463170  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:11:46.493800  218317 cri.go:89] found id: ""
	I1129 09:11:46.493832  218317 logs.go:282] 0 containers: []
	W1129 09:11:46.493856  218317 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:11:46.493864  218317 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:11:46.493915  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:11:46.523380  218317 cri.go:89] found id: "ead086a0104bf53c59ed5d8e223ccbb7be900745b5ce874ae12a9c40c089d336"
	I1129 09:11:46.523414  218317 cri.go:89] found id: ""
	I1129 09:11:46.523422  218317 logs.go:282] 1 containers: [ead086a0104bf53c59ed5d8e223ccbb7be900745b5ce874ae12a9c40c089d336]
	I1129 09:11:46.523478  218317 ssh_runner.go:195] Run: which crictl
	I1129 09:11:46.527692  218317 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 09:11:46.527749  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:11:46.557805  218317 cri.go:89] found id: ""
	I1129 09:11:46.557836  218317 logs.go:282] 0 containers: []
	W1129 09:11:46.557874  218317 logs.go:284] No container was found matching "kindnet"
	I1129 09:11:46.557882  218317 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:11:46.557942  218317 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:11:46.586162  218317 cri.go:89] found id: ""
	I1129 09:11:46.586191  218317 logs.go:282] 0 containers: []
	W1129 09:11:46.586201  218317 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:11:46.586213  218317 logs.go:123] Gathering logs for kube-apiserver [ad97145496ded2f71cd6c4470059d5cc8eba68fc6ea8dd528688b83ade8c4db8] ...
	I1129 09:11:46.586230  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ad97145496ded2f71cd6c4470059d5cc8eba68fc6ea8dd528688b83ade8c4db8"
	I1129 09:11:46.621284  218317 logs.go:123] Gathering logs for kube-scheduler [d36b98de5d776212c9777ea4840006b8e9bc445d70128594889132af99755d99] ...
	I1129 09:11:46.621320  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d36b98de5d776212c9777ea4840006b8e9bc445d70128594889132af99755d99"
	I1129 09:11:46.677440  218317 logs.go:123] Gathering logs for kube-controller-manager [ead086a0104bf53c59ed5d8e223ccbb7be900745b5ce874ae12a9c40c089d336] ...
	I1129 09:11:46.677486  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ead086a0104bf53c59ed5d8e223ccbb7be900745b5ce874ae12a9c40c089d336"
	I1129 09:11:46.707201  218317 logs.go:123] Gathering logs for CRI-O ...
	I1129 09:11:46.707234  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 09:11:46.762106  218317 logs.go:123] Gathering logs for container status ...
	I1129 09:11:46.762143  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:11:46.792818  218317 logs.go:123] Gathering logs for kubelet ...
	I1129 09:11:46.792857  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:11:46.884654  218317 logs.go:123] Gathering logs for dmesg ...
	I1129 09:11:46.884697  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:11:46.907421  218317 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:11:46.907523  218317 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:11:46.987496  218317 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	
	
	==> CRI-O <==
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.414916307Z" level=info msg="RDT not available in the host system"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.414930456Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.415716309Z" level=info msg="Conmon does support the --sync option"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.415734446Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.415753406Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.416460046Z" level=info msg="Conmon does support the --sync option"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.416476237Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.420299293Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.420329387Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.420926775Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.421386521Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.421446577Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.513021051Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-zwrqh Namespace:kube-system ID:dbb953d5d78dfcbb4e8f84b62bc77e052694698c676219eb37c5d3fc5ffb01ab UID:22030a4d-3d80-4ff5-b4ff-245caa1db156 NetNS:/var/run/netns/80db6f7a-f620-4e62-8ad2-687a6af1ae53 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005aa370}] Aliases:map[]}"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.513211767Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-zwrqh for CNI network kindnet (type=ptp)"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.513659283Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.513685017Z" level=info msg="Starting seccomp notifier watcher"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.513743154Z" level=info msg="Create NRI interface"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.51387465Z" level=info msg="built-in NRI default validator is disabled"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.513891852Z" level=info msg="runtime interface created"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.513902817Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.513908231Z" level=info msg="runtime interface starting up..."
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.513912971Z" level=info msg="starting plugins..."
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.513924557Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 29 09:11:39 pause-295501 crio[2158]: time="2025-11-29T09:11:39.514297807Z" level=info msg="No systemd watchdog enabled"
	Nov 29 09:11:39 pause-295501 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	a8bbbdfd51dab       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   13 seconds ago      Running             coredns                   0                   dbb953d5d78df       coredns-66bc5c9577-zwrqh               kube-system
	f78ddfecb898a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   25 seconds ago      Running             kindnet-cni               0                   3dce700df6f43       kindnet-st2fs                          kube-system
	092bad9397a64       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   25 seconds ago      Running             kube-proxy                0                   306f18b9bacfd       kube-proxy-f4kr8                       kube-system
	a2578393c04ae       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   35 seconds ago      Running             kube-scheduler            0                   ae847dd9e384d       kube-scheduler-pause-295501            kube-system
	a1cfbd7b390ca       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   35 seconds ago      Running             kube-apiserver            0                   985a97e3945e5       kube-apiserver-pause-295501            kube-system
	45e32d34386d7       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   35 seconds ago      Running             etcd                      0                   a3aaf265697d7       etcd-pause-295501                      kube-system
	249d465154ffd       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   35 seconds ago      Running             kube-controller-manager   0                   9efa9ed9d0067       kube-controller-manager-pause-295501   kube-system
	
	
	==> coredns [a8bbbdfd51dab82a8d563b57b00d7ca2194f16c69e6c25030b49a452fa24c721] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39627 - 35765 "HINFO IN 3165023682602823029.1093676017627200724. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.042657653s
	
	
	==> describe nodes <==
	Name:               pause-295501
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-295501
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=pause-295501
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T09_11_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 09:11:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-295501
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 09:11:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 09:11:33 +0000   Sat, 29 Nov 2025 09:11:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 09:11:33 +0000   Sat, 29 Nov 2025 09:11:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 09:11:33 +0000   Sat, 29 Nov 2025 09:11:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 09:11:33 +0000   Sat, 29 Nov 2025 09:11:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-295501
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                1fe31677-beb1-4298-8b5b-7e258707552a
	  Boot ID:                    5b66e152-4b71-4129-a911-cefbf4861c86
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-zwrqh                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-pause-295501                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-st2fs                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-pause-295501             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-pause-295501    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-f4kr8                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-pause-295501             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 31s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s   kubelet          Node pause-295501 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s   kubelet          Node pause-295501 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s   kubelet          Node pause-295501 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node pause-295501 event: Registered Node pause-295501 in Controller
	  Normal  NodeReady                14s   kubelet          Node pause-295501 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.088968] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.025527] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.969002] kauditd_printk_skb: 47 callbacks suppressed
	[Nov29 08:30] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	[  +1.030577] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	[  +1.023853] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	[  +1.023871] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	[  +1.023925] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	[  +2.047756] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	[  +4.031543] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	[Nov29 08:31] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	[ +16.382281] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	[ +32.252561] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 7a dd b3 26 76 7c 66 ad 58 41 71 c1 08 00
	
	
	==> etcd [45e32d34386d74e83368055caa5eb9f063ed6013f8c4cd7c8a1fbf290b1d66ef] <==
	{"level":"warn","ts":"2025-11-29T09:11:13.460918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.470871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.478738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.485939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.493236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.500679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.508773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.515196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.522906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.533059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.540479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.547470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.554453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.561695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.571058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.579927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.587019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.593776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.600732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.608537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.614773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.640038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.646894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.653594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:11:13.707570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49688","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:11:47 up 54 min,  0 user,  load average: 1.54, 2.15, 1.51
	Linux pause-295501 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f78ddfecb898a10860947cd0138c0c91e432eb8133b9c1199e2378046faeefb6] <==
	I1129 09:11:22.862107       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 09:11:22.862431       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1129 09:11:22.862607       1 main.go:148] setting mtu 1500 for CNI 
	I1129 09:11:22.862631       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 09:11:22.862657       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T09:11:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 09:11:23.062420       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 09:11:23.094780       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 09:11:23.094867       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 09:11:23.095052       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1129 09:11:23.395030       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 09:11:23.395071       1 metrics.go:72] Registering metrics
	I1129 09:11:23.395194       1 controller.go:711] "Syncing nftables rules"
	I1129 09:11:33.062980       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 09:11:33.063045       1 main.go:301] handling current node
	I1129 09:11:43.067453       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 09:11:43.067503       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a1cfbd7b390ca764419e13459208f92ccab83b9f560dab98155c947b575c9eb7] <==
	I1129 09:11:14.224778       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1129 09:11:14.224816       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1129 09:11:14.228615       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1129 09:11:14.228640       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:11:14.232248       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:11:14.232500       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1129 09:11:14.236010       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 09:11:14.237701       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1129 09:11:15.099415       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1129 09:11:15.103132       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1129 09:11:15.103161       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 09:11:15.585061       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 09:11:15.625995       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 09:11:15.701551       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1129 09:11:15.707877       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1129 09:11:15.708946       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 09:11:15.713320       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 09:11:16.110453       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 09:11:16.733752       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 09:11:16.743886       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1129 09:11:16.752898       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1129 09:11:21.766268       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:11:21.770637       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:11:22.062269       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 09:11:22.213294       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [249d465154ffd1c8223cffce99d25f392334b77035bb4fd7a68ea732b0d1ffaa] <==
	I1129 09:11:21.102198       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1129 09:11:21.102217       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1129 09:11:21.105710       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1129 09:11:21.108124       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:11:21.108150       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1129 09:11:21.108161       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1129 09:11:21.108662       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1129 09:11:21.109718       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1129 09:11:21.109746       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1129 09:11:21.109770       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1129 09:11:21.109881       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1129 09:11:21.109985       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1129 09:11:21.110006       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1129 09:11:21.110047       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1129 09:11:21.110073       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1129 09:11:21.110084       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1129 09:11:21.110589       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1129 09:11:21.110640       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1129 09:11:21.110929       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1129 09:11:21.112527       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1129 09:11:21.113362       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1129 09:11:21.118965       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:11:21.132197       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:11:21.143557       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1129 09:11:36.062348       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [092bad9397a64f5518143503ccfeb6661abcd1fb66cf16f31703c648078497fe] <==
	I1129 09:11:22.642815       1 server_linux.go:53] "Using iptables proxy"
	I1129 09:11:22.698111       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 09:11:22.798759       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 09:11:22.798803       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1129 09:11:22.798980       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 09:11:22.818279       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 09:11:22.818351       1 server_linux.go:132] "Using iptables Proxier"
	I1129 09:11:22.824075       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 09:11:22.824489       1 server.go:527] "Version info" version="v1.34.1"
	I1129 09:11:22.824516       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:11:22.826980       1 config.go:106] "Starting endpoint slice config controller"
	I1129 09:11:22.827063       1 config.go:200] "Starting service config controller"
	I1129 09:11:22.827087       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 09:11:22.827073       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 09:11:22.827166       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 09:11:22.827187       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 09:11:22.827261       1 config.go:309] "Starting node config controller"
	I1129 09:11:22.827296       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 09:11:22.827303       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 09:11:22.927684       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1129 09:11:22.927728       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 09:11:22.927750       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [a2578393c04aeba954d20c8ac71275220f67953d2fd43b6e378182a2c47660e2] <==
	I1129 09:11:14.767752       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:11:14.769540       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:11:14.769584       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:11:14.769804       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1129 09:11:14.769869       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1129 09:11:14.772677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1129 09:11:14.772776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1129 09:11:14.773164       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1129 09:11:14.773191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 09:11:14.773225       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 09:11:14.773286       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 09:11:14.773390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1129 09:11:14.773393       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1129 09:11:14.773817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1129 09:11:14.774242       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1129 09:11:14.774289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1129 09:11:14.774390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1129 09:11:14.774437       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1129 09:11:14.774561       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1129 09:11:14.774715       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1129 09:11:14.774975       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1129 09:11:14.774974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1129 09:11:14.775086       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 09:11:14.775194       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1129 09:11:16.070110       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 09:11:17 pause-295501 kubelet[1297]: E1129 09:11:17.625056    1297 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-295501\" already exists" pod="kube-system/kube-controller-manager-pause-295501"
	Nov 29 09:11:17 pause-295501 kubelet[1297]: I1129 09:11:17.637962    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-295501" podStartSLOduration=1.6379409329999999 podStartE2EDuration="1.637940933s" podCreationTimestamp="2025-11-29 09:11:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:11:17.636516565 +0000 UTC m=+1.133499447" watchObservedRunningTime="2025-11-29 09:11:17.637940933 +0000 UTC m=+1.134923811"
	Nov 29 09:11:17 pause-295501 kubelet[1297]: I1129 09:11:17.661325    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-295501" podStartSLOduration=1.661301759 podStartE2EDuration="1.661301759s" podCreationTimestamp="2025-11-29 09:11:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:11:17.648189449 +0000 UTC m=+1.145172332" watchObservedRunningTime="2025-11-29 09:11:17.661301759 +0000 UTC m=+1.158284642"
	Nov 29 09:11:17 pause-295501 kubelet[1297]: I1129 09:11:17.671522    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-295501" podStartSLOduration=1.6715015050000002 podStartE2EDuration="1.671501505s" podCreationTimestamp="2025-11-29 09:11:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:11:17.661299922 +0000 UTC m=+1.158282805" watchObservedRunningTime="2025-11-29 09:11:17.671501505 +0000 UTC m=+1.168484389"
	Nov 29 09:11:17 pause-295501 kubelet[1297]: I1129 09:11:17.682126    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-295501" podStartSLOduration=1.682101937 podStartE2EDuration="1.682101937s" podCreationTimestamp="2025-11-29 09:11:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:11:17.671666677 +0000 UTC m=+1.168649559" watchObservedRunningTime="2025-11-29 09:11:17.682101937 +0000 UTC m=+1.179084814"
	Nov 29 09:11:21 pause-295501 kubelet[1297]: I1129 09:11:21.125441    1297 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 29 09:11:21 pause-295501 kubelet[1297]: I1129 09:11:21.126253    1297 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 29 09:11:22 pause-295501 kubelet[1297]: I1129 09:11:22.322684    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a6934cc7-9fdf-4551-bfd6-b001f95eb4f2-cni-cfg\") pod \"kindnet-st2fs\" (UID: \"a6934cc7-9fdf-4551-bfd6-b001f95eb4f2\") " pod="kube-system/kindnet-st2fs"
	Nov 29 09:11:22 pause-295501 kubelet[1297]: I1129 09:11:22.322766    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n747c\" (UniqueName: \"kubernetes.io/projected/049b4663-ac1a-4dfc-9ab7-7060baa838e6-kube-api-access-n747c\") pod \"kube-proxy-f4kr8\" (UID: \"049b4663-ac1a-4dfc-9ab7-7060baa838e6\") " pod="kube-system/kube-proxy-f4kr8"
	Nov 29 09:11:22 pause-295501 kubelet[1297]: I1129 09:11:22.322793    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6934cc7-9fdf-4551-bfd6-b001f95eb4f2-lib-modules\") pod \"kindnet-st2fs\" (UID: \"a6934cc7-9fdf-4551-bfd6-b001f95eb4f2\") " pod="kube-system/kindnet-st2fs"
	Nov 29 09:11:22 pause-295501 kubelet[1297]: I1129 09:11:22.322820    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z49g8\" (UniqueName: \"kubernetes.io/projected/a6934cc7-9fdf-4551-bfd6-b001f95eb4f2-kube-api-access-z49g8\") pod \"kindnet-st2fs\" (UID: \"a6934cc7-9fdf-4551-bfd6-b001f95eb4f2\") " pod="kube-system/kindnet-st2fs"
	Nov 29 09:11:22 pause-295501 kubelet[1297]: I1129 09:11:22.322874    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/049b4663-ac1a-4dfc-9ab7-7060baa838e6-kube-proxy\") pod \"kube-proxy-f4kr8\" (UID: \"049b4663-ac1a-4dfc-9ab7-7060baa838e6\") " pod="kube-system/kube-proxy-f4kr8"
	Nov 29 09:11:22 pause-295501 kubelet[1297]: I1129 09:11:22.322898    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/049b4663-ac1a-4dfc-9ab7-7060baa838e6-xtables-lock\") pod \"kube-proxy-f4kr8\" (UID: \"049b4663-ac1a-4dfc-9ab7-7060baa838e6\") " pod="kube-system/kube-proxy-f4kr8"
	Nov 29 09:11:22 pause-295501 kubelet[1297]: I1129 09:11:22.322969    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/049b4663-ac1a-4dfc-9ab7-7060baa838e6-lib-modules\") pod \"kube-proxy-f4kr8\" (UID: \"049b4663-ac1a-4dfc-9ab7-7060baa838e6\") " pod="kube-system/kube-proxy-f4kr8"
	Nov 29 09:11:22 pause-295501 kubelet[1297]: I1129 09:11:22.323013    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6934cc7-9fdf-4551-bfd6-b001f95eb4f2-xtables-lock\") pod \"kindnet-st2fs\" (UID: \"a6934cc7-9fdf-4551-bfd6-b001f95eb4f2\") " pod="kube-system/kindnet-st2fs"
	Nov 29 09:11:22 pause-295501 kubelet[1297]: I1129 09:11:22.641147    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f4kr8" podStartSLOduration=0.641125525 podStartE2EDuration="641.125525ms" podCreationTimestamp="2025-11-29 09:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:11:22.640554293 +0000 UTC m=+6.137537177" watchObservedRunningTime="2025-11-29 09:11:22.641125525 +0000 UTC m=+6.138108409"
	Nov 29 09:11:22 pause-295501 kubelet[1297]: I1129 09:11:22.653296    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-st2fs" podStartSLOduration=0.653274475 podStartE2EDuration="653.274475ms" podCreationTimestamp="2025-11-29 09:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:11:22.652816621 +0000 UTC m=+6.149799506" watchObservedRunningTime="2025-11-29 09:11:22.653274475 +0000 UTC m=+6.150257358"
	Nov 29 09:11:33 pause-295501 kubelet[1297]: I1129 09:11:33.488492    1297 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 29 09:11:33 pause-295501 kubelet[1297]: I1129 09:11:33.607819    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22030a4d-3d80-4ff5-b4ff-245caa1db156-config-volume\") pod \"coredns-66bc5c9577-zwrqh\" (UID: \"22030a4d-3d80-4ff5-b4ff-245caa1db156\") " pod="kube-system/coredns-66bc5c9577-zwrqh"
	Nov 29 09:11:33 pause-295501 kubelet[1297]: I1129 09:11:33.607908    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqzbb\" (UniqueName: \"kubernetes.io/projected/22030a4d-3d80-4ff5-b4ff-245caa1db156-kube-api-access-mqzbb\") pod \"coredns-66bc5c9577-zwrqh\" (UID: \"22030a4d-3d80-4ff5-b4ff-245caa1db156\") " pod="kube-system/coredns-66bc5c9577-zwrqh"
	Nov 29 09:11:34 pause-295501 kubelet[1297]: I1129 09:11:34.670210    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zwrqh" podStartSLOduration=12.670178731 podStartE2EDuration="12.670178731s" podCreationTimestamp="2025-11-29 09:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:11:34.669909943 +0000 UTC m=+18.166892846" watchObservedRunningTime="2025-11-29 09:11:34.670178731 +0000 UTC m=+18.167161610"
	Nov 29 09:11:43 pause-295501 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 29 09:11:43 pause-295501 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 29 09:11:43 pause-295501 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 29 09:11:43 pause-295501 systemd[1]: kubelet.service: Consumed 1.205s CPU time.
	
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-295501 -n pause-295501
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-295501 -n pause-295501: exit status 2 (334.059117ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-295501 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (5.80s)
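The probe that failed above can be replayed by hand. A minimal sketch using the profile name from the log (after a successful pause, minikube reports the API server as "Paused" rather than "Running"; exit status 2 from the status command only flags a component outside its expected state, as the helper itself notes):

	# Pause the cluster, then re-run the status probe from helpers_test.go:
	out/minikube-linux-amd64 pause -p pause-295501
	out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-295501 -n pause-295501
	# expected on a passing run: Paused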
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.39s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-680646 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-680646 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (286.499616ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:16:01Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
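The MK_ADDON_ENABLE_PAUSED error above comes from minikube's paused-state probe: before enabling the addon it runs "sudo runc list -f json" on the node, and on this CRI-O node that fails because /run/runc does not exist. A sketch for reproducing the probe directly (the runc command is taken verbatim from the error text; the trailing ls is only illustrative):

	out/minikube-linux-amd64 ssh -p old-k8s-version-680646 -- sudo runc list -f json
	# fails on this node with: open /run/runc: no such file or directory
	out/minikube-linux-amd64 ssh -p old-k8s-version-680646 -- sudo ls /run/runc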
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-680646 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-680646 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-680646 describe deploy/metrics-server -n kube-system: exit status 1 (65.049345ms)
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-680646 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
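For reference, the assertion at start_stop_delete_test.go:219 checks the deployment's container image against the registry/image overrides passed on the command line. A quick manual check, assuming the deployment had been created (here it never was, hence the NotFound above):

	kubectl --context old-k8s-version-680646 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	# a passing run would print: fake.domain/registry.k8s.io/echoserver:1.4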
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-680646
helpers_test.go:243: (dbg) docker inspect old-k8s-version-680646:
-- stdout --
	[
	    {
	        "Id": "09f4f79f42ba3a1cca8b8af1853d931e4dbcad3c3c7527a57c7e84f8c2ac2ab8",
	        "Created": "2025-11-29T09:15:05.20238369Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 302548,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T09:15:05.247351334Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/09f4f79f42ba3a1cca8b8af1853d931e4dbcad3c3c7527a57c7e84f8c2ac2ab8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/09f4f79f42ba3a1cca8b8af1853d931e4dbcad3c3c7527a57c7e84f8c2ac2ab8/hostname",
	        "HostsPath": "/var/lib/docker/containers/09f4f79f42ba3a1cca8b8af1853d931e4dbcad3c3c7527a57c7e84f8c2ac2ab8/hosts",
	        "LogPath": "/var/lib/docker/containers/09f4f79f42ba3a1cca8b8af1853d931e4dbcad3c3c7527a57c7e84f8c2ac2ab8/09f4f79f42ba3a1cca8b8af1853d931e4dbcad3c3c7527a57c7e84f8c2ac2ab8-json.log",
	        "Name": "/old-k8s-version-680646",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-680646:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-680646",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "09f4f79f42ba3a1cca8b8af1853d931e4dbcad3c3c7527a57c7e84f8c2ac2ab8",
	                "LowerDir": "/var/lib/docker/overlay2/968ca6ee81356bbcecebb99911f7a3b0a6f59a701eda8a25aa396e0371a519e5-init/diff:/var/lib/docker/overlay2/5b012372cfb54f6c71f4d7f0bca0124866eeda530eaa04bd84a67c7b4c8b35a8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/968ca6ee81356bbcecebb99911f7a3b0a6f59a701eda8a25aa396e0371a519e5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/968ca6ee81356bbcecebb99911f7a3b0a6f59a701eda8a25aa396e0371a519e5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/968ca6ee81356bbcecebb99911f7a3b0a6f59a701eda8a25aa396e0371a519e5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-680646",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-680646/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-680646",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-680646",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-680646",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "85ca50aff75283f5c3789b65fb802123172e4113b3b02369948bc48ba61fa97e",
	            "SandboxKey": "/var/run/docker/netns/85ca50aff752",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-680646": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a43c754cd40971db489179630ca1055c6922bb09bc13c0b7b4d8e4460b07cb9b",
	                    "EndpointID": "cb5aa22a70b9df78088eca9a878196ec8af98c68f4e4e489a53a97900a58c848",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "a2:51:34:58:2c:27",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-680646",
	                        "09f4f79f42ba"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
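The inspect dump above carries the host-port mappings for the node container; individual fields can be pulled out with docker's Go-template support instead of reading the full JSON. A sketch (the field path matches the JSON shown):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-680646
	# prints the host port mapped to the node's SSH port, 33089 in the dump above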
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-680646 -n old-k8s-version-680646
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-680646 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-680646 logs -n 25: (1.147015301s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-628644 sudo cat /var/lib/kubelet/config.yaml                                                                                                                   │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p flannel-628644 sudo crio config                                                                                                                                       │ flannel-628644               │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ delete  │ -p flannel-628644                                                                                                                                                        │ flannel-628644               │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo systemctl cat docker --no-pager                                                                                                                    │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo cat /etc/docker/daemon.json                                                                                                                        │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p bridge-628644 sudo docker system info                                                                                                                                 │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p bridge-628644 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p bridge-628644 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p bridge-628644 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo cri-dockerd --version                                                                                                                              │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ start   │ -p embed-certs-160987 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p bridge-628644 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p bridge-628644 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo containerd config dump                                                                                                                             │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo crio config                                                                                                                                        │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ delete  │ -p bridge-628644                                                                                                                                                         │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ delete  │ -p disable-driver-mounts-327778                                                                                                                                          │ disable-driver-mounts-327778 │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ start   │ -p default-k8s-diff-port-632243 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-680646 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                             │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:15:56
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 09:15:56.476960  322024 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:15:56.477236  322024 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:15:56.477246  322024 out.go:374] Setting ErrFile to fd 2...
	I1129 09:15:56.477250  322024 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:15:56.477471  322024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 09:15:56.478006  322024 out.go:368] Setting JSON to false
	I1129 09:15:56.479372  322024 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3508,"bootTime":1764404248,"procs":382,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 09:15:56.479446  322024 start.go:143] virtualization: kvm guest
	I1129 09:15:56.481300  322024 out.go:179] * [default-k8s-diff-port-632243] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 09:15:56.483331  322024 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:15:56.483407  322024 notify.go:221] Checking for updates...
	I1129 09:15:56.486162  322024 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:15:56.487499  322024 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 09:15:56.489982  322024 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5652/.minikube
	I1129 09:15:56.492265  322024 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 09:15:56.493435  322024 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:15:56.495096  322024 config.go:182] Loaded profile config "embed-certs-160987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:15:56.495208  322024 config.go:182] Loaded profile config "no-preload-897274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:15:56.495293  322024 config.go:182] Loaded profile config "old-k8s-version-680646": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1129 09:15:56.495409  322024 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:15:56.523766  322024 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1129 09:15:56.523865  322024 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:15:56.587194  322024 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-29 09:15:56.577155917 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:15:56.587317  322024 docker.go:319] overlay module found
	I1129 09:15:56.589013  322024 out.go:179] * Using the docker driver based on user configuration
	I1129 09:15:56.590014  322024 start.go:309] selected driver: docker
	I1129 09:15:56.590029  322024 start.go:927] validating driver "docker" against <nil>
	I1129 09:15:56.590041  322024 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:15:56.590644  322024 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:15:56.651106  322024 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-29 09:15:56.64061463 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:15:56.651283  322024 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1129 09:15:56.651529  322024 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:15:56.653420  322024 out.go:179] * Using Docker driver with root privileges
	I1129 09:15:56.654790  322024 cni.go:84] Creating CNI manager for ""
	I1129 09:15:56.654876  322024 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:15:56.654890  322024 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1129 09:15:56.654966  322024 start.go:353] cluster config:
	{Name:default-k8s-diff-port-632243 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-632243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:15:56.656309  322024 out.go:179] * Starting "default-k8s-diff-port-632243" primary control-plane node in "default-k8s-diff-port-632243" cluster
	I1129 09:15:56.657461  322024 cache.go:134] Beginning downloading kic base image for docker with crio
	I1129 09:15:56.658715  322024 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 09:15:56.659919  322024 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:15:56.659981  322024 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1129 09:15:56.659993  322024 cache.go:65] Caching tarball of preloaded images
	I1129 09:15:56.660035  322024 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 09:15:56.660088  322024 preload.go:238] Found /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1129 09:15:56.660099  322024 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1129 09:15:56.660207  322024 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/config.json ...
	I1129 09:15:56.660237  322024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/config.json: {Name:mkfda8f89e875f76dcf06e6cee2e601a1e0a1e02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:15:56.682094  322024 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 09:15:56.682117  322024 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 09:15:56.682135  322024 cache.go:243] Successfully downloaded all kic artifacts
	I1129 09:15:56.682180  322024 start.go:360] acquireMachinesLock for default-k8s-diff-port-632243: {Name:mk4d57d40865f49c5625093aed79ed0eb9003360 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:15:56.682310  322024 start.go:364] duration metric: took 105.691µs to acquireMachinesLock for "default-k8s-diff-port-632243"
	I1129 09:15:56.682341  322024 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-632243 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-632243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 09:15:56.682435  322024 start.go:125] createHost starting for "" (driver="docker")
	W1129 09:15:53.101392  305400 node_ready.go:57] node "no-preload-897274" has "Ready":"False" status (will retry)
	W1129 09:15:55.561149  305400 node_ready.go:57] node "no-preload-897274" has "Ready":"False" status (will retry)
	I1129 09:15:55.194682  318819 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-160987:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (5.554894399s)
	I1129 09:15:55.194716  318819 kic.go:203] duration metric: took 5.555053357s to extract preloaded images to volume ...
	W1129 09:15:55.194805  318819 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1129 09:15:55.195092  318819 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1129 09:15:55.195175  318819 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1129 09:15:55.272673  318819 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-160987 --name embed-certs-160987 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-160987 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-160987 --network embed-certs-160987 --ip 192.168.85.2 --volume embed-certs-160987:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1129 09:15:55.648087  318819 cli_runner.go:164] Run: docker container inspect embed-certs-160987 --format={{.State.Running}}
	I1129 09:15:55.668943  318819 cli_runner.go:164] Run: docker container inspect embed-certs-160987 --format={{.State.Status}}
	I1129 09:15:55.689310  318819 cli_runner.go:164] Run: docker exec embed-certs-160987 stat /var/lib/dpkg/alternatives/iptables
	I1129 09:15:55.745246  318819 oci.go:144] the created container "embed-certs-160987" has a running status.
	I1129 09:15:55.745285  318819 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22000-5652/.minikube/machines/embed-certs-160987/id_rsa...
	I1129 09:15:55.823128  318819 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22000-5652/.minikube/machines/embed-certs-160987/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1129 09:15:55.989201  318819 cli_runner.go:164] Run: docker container inspect embed-certs-160987 --format={{.State.Status}}
	I1129 09:15:56.014465  318819 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1129 09:15:56.014490  318819 kic_runner.go:114] Args: [docker exec --privileged embed-certs-160987 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1129 09:15:56.069162  318819 cli_runner.go:164] Run: docker container inspect embed-certs-160987 --format={{.State.Status}}
	I1129 09:15:56.098535  318819 machine.go:94] provisionDockerMachine start ...
	I1129 09:15:56.098615  318819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:15:56.121969  318819 main.go:143] libmachine: Using SSH client type: native
	I1129 09:15:56.122350  318819 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1129 09:15:56.122382  318819 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:15:56.277456  318819 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-160987
	
	I1129 09:15:56.277489  318819 ubuntu.go:182] provisioning hostname "embed-certs-160987"
	I1129 09:15:56.277551  318819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:15:56.299514  318819 main.go:143] libmachine: Using SSH client type: native
	I1129 09:15:56.299817  318819 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1129 09:15:56.299867  318819 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-160987 && echo "embed-certs-160987" | sudo tee /etc/hostname
	I1129 09:15:56.466446  318819 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-160987
	
	I1129 09:15:56.466547  318819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:15:56.489584  318819 main.go:143] libmachine: Using SSH client type: native
	I1129 09:15:56.489800  318819 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1129 09:15:56.489820  318819 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-160987' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-160987/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-160987' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:15:56.646503  318819 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 09:15:56.646544  318819 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-5652/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-5652/.minikube}
	I1129 09:15:56.646594  318819 ubuntu.go:190] setting up certificates
	I1129 09:15:56.646608  318819 provision.go:84] configureAuth start
	I1129 09:15:56.646680  318819 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-160987
	I1129 09:15:56.667150  318819 provision.go:143] copyHostCerts
	I1129 09:15:56.667209  318819 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem, removing ...
	I1129 09:15:56.667217  318819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem
	I1129 09:15:56.667285  318819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem (1078 bytes)
	I1129 09:15:56.667418  318819 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem, removing ...
	I1129 09:15:56.667430  318819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem
	I1129 09:15:56.667459  318819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem (1123 bytes)
	I1129 09:15:56.667521  318819 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem, removing ...
	I1129 09:15:56.667529  318819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem
	I1129 09:15:56.667551  318819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem (1675 bytes)
	I1129 09:15:56.667602  318819 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem org=jenkins.embed-certs-160987 san=[127.0.0.1 192.168.85.2 embed-certs-160987 localhost minikube]
	I1129 09:15:56.700618  318819 provision.go:177] copyRemoteCerts
	I1129 09:15:56.700690  318819 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:15:56.700743  318819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:15:56.720980  318819 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/embed-certs-160987/id_rsa Username:docker}
	I1129 09:15:56.829506  318819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1129 09:15:56.852079  318819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1129 09:15:56.873181  318819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1129 09:15:56.894186  318819 provision.go:87] duration metric: took 247.56289ms to configureAuth
	I1129 09:15:56.894221  318819 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:15:56.894415  318819 config.go:182] Loaded profile config "embed-certs-160987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:15:56.894526  318819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:15:56.914708  318819 main.go:143] libmachine: Using SSH client type: native
	I1129 09:15:56.915066  318819 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1129 09:15:56.915096  318819 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 09:15:57.227102  318819 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 09:15:57.227135  318819 machine.go:97] duration metric: took 1.128580223s to provisionDockerMachine
	I1129 09:15:57.227146  318819 client.go:176] duration metric: took 8.234271965s to LocalClient.Create
	I1129 09:15:57.227171  318819 start.go:167] duration metric: took 8.234349965s to libmachine.API.Create "embed-certs-160987"
	I1129 09:15:57.227181  318819 start.go:293] postStartSetup for "embed-certs-160987" (driver="docker")
	I1129 09:15:57.227194  318819 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:15:57.227338  318819 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:15:57.227403  318819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:15:57.249717  318819 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/embed-certs-160987/id_rsa Username:docker}
	I1129 09:15:57.359244  318819 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:15:57.363325  318819 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:15:57.363356  318819 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:15:57.363371  318819 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5652/.minikube/addons for local assets ...
	I1129 09:15:57.363439  318819 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5652/.minikube/files for local assets ...
	I1129 09:15:57.363541  318819 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem -> 92162.pem in /etc/ssl/certs
	I1129 09:15:57.363659  318819 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:15:57.372406  318819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem --> /etc/ssl/certs/92162.pem (1708 bytes)
	I1129 09:15:57.399224  318819 start.go:296] duration metric: took 172.026651ms for postStartSetup
	I1129 09:15:57.399672  318819 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-160987
	I1129 09:15:57.423957  318819 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/config.json ...
	I1129 09:15:57.424335  318819 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:15:57.424435  318819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:15:57.446282  318819 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/embed-certs-160987/id_rsa Username:docker}
	I1129 09:15:57.552231  318819 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 09:15:57.557454  318819 start.go:128] duration metric: took 8.567604388s to createHost
	I1129 09:15:57.557483  318819 start.go:83] releasing machines lock for "embed-certs-160987", held for 8.567754124s
	I1129 09:15:57.557556  318819 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-160987
	I1129 09:15:57.579063  318819 ssh_runner.go:195] Run: cat /version.json
	I1129 09:15:57.579130  318819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:15:57.579157  318819 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:15:57.579225  318819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:15:57.601771  318819 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/embed-certs-160987/id_rsa Username:docker}
	I1129 09:15:57.602769  318819 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/embed-certs-160987/id_rsa Username:docker}
	I1129 09:15:57.762861  318819 ssh_runner.go:195] Run: systemctl --version
	I1129 09:15:57.770445  318819 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 09:15:57.810250  318819 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:15:57.815921  318819 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:15:57.815996  318819 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:15:57.848108  318819 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1129 09:15:57.848136  318819 start.go:496] detecting cgroup driver to use...
	I1129 09:15:57.848167  318819 detect.go:190] detected "systemd" cgroup driver on host os
	I1129 09:15:57.848207  318819 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 09:15:57.866385  318819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 09:15:57.881540  318819 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:15:57.881612  318819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:15:57.902095  318819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:15:57.922756  318819 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:15:58.012347  318819 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:15:58.109284  318819 docker.go:234] disabling docker service ...
	I1129 09:15:58.109372  318819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:15:58.130262  318819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:15:58.145550  318819 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:15:58.257124  318819 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:15:58.349859  318819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:15:58.363532  318819 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:15:58.379539  318819 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 09:15:58.379606  318819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:15:58.390767  318819 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1129 09:15:58.390822  318819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:15:58.400860  318819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:15:58.410683  318819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:15:58.420780  318819 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:15:58.430402  318819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:15:58.441017  318819 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:15:58.457610  318819 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:15:58.467787  318819 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:15:58.476242  318819 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:15:58.485152  318819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:15:58.577007  318819 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1129 09:16:00.955707  318819 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.378663103s)
	I1129 09:16:00.955746  318819 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 09:16:00.955801  318819 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 09:16:00.960557  318819 start.go:564] Will wait 60s for crictl version
	I1129 09:16:00.960627  318819 ssh_runner.go:195] Run: which crictl
	I1129 09:16:00.964975  318819 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:16:00.992553  318819 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1129 09:16:00.992628  318819 ssh_runner.go:195] Run: crio --version
	I1129 09:16:01.024304  318819 ssh_runner.go:195] Run: crio --version
	I1129 09:16:01.063751  318819 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1129 09:15:56.684365  322024 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1129 09:15:56.684592  322024 start.go:159] libmachine.API.Create for "default-k8s-diff-port-632243" (driver="docker")
	I1129 09:15:56.684631  322024 client.go:173] LocalClient.Create starting
	I1129 09:15:56.684710  322024 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem
	I1129 09:15:56.684748  322024 main.go:143] libmachine: Decoding PEM data...
	I1129 09:15:56.684767  322024 main.go:143] libmachine: Parsing certificate...
	I1129 09:15:56.684826  322024 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem
	I1129 09:15:56.684887  322024 main.go:143] libmachine: Decoding PEM data...
	I1129 09:15:56.684906  322024 main.go:143] libmachine: Parsing certificate...
	I1129 09:15:56.685266  322024 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-632243 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1129 09:15:56.703613  322024 cli_runner.go:211] docker network inspect default-k8s-diff-port-632243 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1129 09:15:56.703677  322024 network_create.go:284] running [docker network inspect default-k8s-diff-port-632243] to gather additional debugging logs...
	I1129 09:15:56.703696  322024 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-632243
	W1129 09:15:56.722767  322024 cli_runner.go:211] docker network inspect default-k8s-diff-port-632243 returned with exit code 1
	I1129 09:15:56.722799  322024 network_create.go:287] error running [docker network inspect default-k8s-diff-port-632243]: docker network inspect default-k8s-diff-port-632243: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-632243 not found
	I1129 09:15:56.722816  322024 network_create.go:289] output of [docker network inspect default-k8s-diff-port-632243]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-632243 not found
	
	** /stderr **
	I1129 09:15:56.722962  322024 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:15:56.742214  322024 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-94fc752bc7a7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:ed:43:e0:ad:5a} reservation:<nil>}
	I1129 09:15:56.742920  322024 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4cfc302f5d5a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:42:73:ac:ba:18:bb} reservation:<nil>}
	I1129 09:15:56.743959  322024 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-05a73bbe16b8 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:a9:af:00:78:ac} reservation:<nil>}
	I1129 09:15:56.744668  322024 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-a43c754cd409 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5e:00:3d:cd:12:c2} reservation:<nil>}
	I1129 09:15:56.745445  322024 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-8f9ed915c5ff IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:c2:69:2b:93:26:b4} reservation:<nil>}
	I1129 09:15:56.746026  322024 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-d8f02c8f2b11 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:5e:d7:2f:9e:57:74} reservation:<nil>}
	I1129 09:15:56.746905  322024 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00202ea00}
	I1129 09:15:56.746934  322024 network_create.go:124] attempt to create docker network default-k8s-diff-port-632243 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1129 09:15:56.746982  322024 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-632243 default-k8s-diff-port-632243
	I1129 09:15:56.799899  322024 network_create.go:108] docker network default-k8s-diff-port-632243 192.168.103.0/24 created
	I1129 09:15:56.799932  322024 kic.go:121] calculated static IP "192.168.103.2" for the "default-k8s-diff-port-632243" container
	I1129 09:15:56.799988  322024 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1129 09:15:56.819908  322024 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-632243 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-632243 --label created_by.minikube.sigs.k8s.io=true
	I1129 09:15:56.842058  322024 oci.go:103] Successfully created a docker volume default-k8s-diff-port-632243
	I1129 09:15:56.842161  322024 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-632243-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-632243 --entrypoint /usr/bin/test -v default-k8s-diff-port-632243:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1129 09:15:57.257763  322024 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-632243
	I1129 09:15:57.257859  322024 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:15:57.257892  322024 kic.go:194] Starting extracting preloaded images to volume ...
	I1129 09:15:57.257963  322024 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-632243:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1129 09:16:00.823605  322024 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-632243:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (3.565578273s)
	I1129 09:16:00.823643  322024 kic.go:203] duration metric: took 3.565763307s to extract preloaded images to volume ...
	W1129 09:16:00.823751  322024 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1129 09:16:00.823798  322024 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1129 09:16:00.823862  322024 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1129 09:16:00.891029  322024 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-632243 --name default-k8s-diff-port-632243 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-632243 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-632243 --network default-k8s-diff-port-632243 --ip 192.168.103.2 --volume default-k8s-diff-port-632243:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1129 09:16:01.196837  322024 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-632243 --format={{.State.Running}}
	I1129 09:16:01.218159  322024 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-632243 --format={{.State.Status}}
	I1129 09:16:01.239027  322024 cli_runner.go:164] Run: docker exec default-k8s-diff-port-632243 stat /var/lib/dpkg/alternatives/iptables
	I1129 09:16:01.291218  322024 oci.go:144] the created container "default-k8s-diff-port-632243" has a running status.
	I1129 09:16:01.291242  322024 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22000-5652/.minikube/machines/default-k8s-diff-port-632243/id_rsa...
	W1129 09:15:58.060509  305400 node_ready.go:57] node "no-preload-897274" has "Ready":"False" status (will retry)
	W1129 09:16:00.060567  305400 node_ready.go:57] node "no-preload-897274" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 29 09:15:50 old-k8s-version-680646 crio[771]: time="2025-11-29T09:15:50.363112536Z" level=info msg="Starting container: b9cfbde268060f8536e7424c2c92677fd9f400959b7daa640f3426147ba76ae3" id=960832c7-0b94-4c4a-bc90-471b20d74377 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 09:15:50 old-k8s-version-680646 crio[771]: time="2025-11-29T09:15:50.36705077Z" level=info msg="Started container" PID=2133 containerID=b9cfbde268060f8536e7424c2c92677fd9f400959b7daa640f3426147ba76ae3 description=kube-system/coredns-5dd5756b68-lwg8c/coredns id=960832c7-0b94-4c4a-bc90-471b20d74377 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d65d3aa595e4cca46ddf5fc5a15a3ddbf9d0a2d8bbc08223ad906a020639e908
	Nov 29 09:15:53 old-k8s-version-680646 crio[771]: time="2025-11-29T09:15:53.917303637Z" level=info msg="Running pod sandbox: default/busybox/POD" id=32a8ef8f-c5bf-4f07-be9f-489fca358509 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 09:15:53 old-k8s-version-680646 crio[771]: time="2025-11-29T09:15:53.917401537Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:15:53 old-k8s-version-680646 crio[771]: time="2025-11-29T09:15:53.992348903Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6ef1acb1b9006b43212e588619da1c383dbedcfb0bad34ce4d48b28862a94eab UID:448319e8-daf0-4564-b243-93ff2f707e47 NetNS:/var/run/netns/29663e74-318a-4a86-9bf9-9cae1fe3a4ae Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000490388}] Aliases:map[]}"
	Nov 29 09:15:53 old-k8s-version-680646 crio[771]: time="2025-11-29T09:15:53.992396451Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 29 09:15:54 old-k8s-version-680646 crio[771]: time="2025-11-29T09:15:54.002302879Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6ef1acb1b9006b43212e588619da1c383dbedcfb0bad34ce4d48b28862a94eab UID:448319e8-daf0-4564-b243-93ff2f707e47 NetNS:/var/run/netns/29663e74-318a-4a86-9bf9-9cae1fe3a4ae Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000490388}] Aliases:map[]}"
	Nov 29 09:15:54 old-k8s-version-680646 crio[771]: time="2025-11-29T09:15:54.002505804Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 29 09:15:54 old-k8s-version-680646 crio[771]: time="2025-11-29T09:15:54.003393935Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 29 09:15:54 old-k8s-version-680646 crio[771]: time="2025-11-29T09:15:54.00433096Z" level=info msg="Ran pod sandbox 6ef1acb1b9006b43212e588619da1c383dbedcfb0bad34ce4d48b28862a94eab with infra container: default/busybox/POD" id=32a8ef8f-c5bf-4f07-be9f-489fca358509 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 09:15:54 old-k8s-version-680646 crio[771]: time="2025-11-29T09:15:54.005624029Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=849499e6-3c80-46f8-8dde-08ddf757c067 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:15:54 old-k8s-version-680646 crio[771]: time="2025-11-29T09:15:54.005746863Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=849499e6-3c80-46f8-8dde-08ddf757c067 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:15:54 old-k8s-version-680646 crio[771]: time="2025-11-29T09:15:54.005788968Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=849499e6-3c80-46f8-8dde-08ddf757c067 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:15:54 old-k8s-version-680646 crio[771]: time="2025-11-29T09:15:54.00643049Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b9bb9c8a-8398-4cc2-a8df-ab84daf38505 name=/runtime.v1.ImageService/PullImage
	Nov 29 09:15:54 old-k8s-version-680646 crio[771]: time="2025-11-29T09:15:54.008059023Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 29 09:15:55 old-k8s-version-680646 crio[771]: time="2025-11-29T09:15:55.353587004Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=b9bb9c8a-8398-4cc2-a8df-ab84daf38505 name=/runtime.v1.ImageService/PullImage
	Nov 29 09:15:55 old-k8s-version-680646 crio[771]: time="2025-11-29T09:15:55.354682488Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=30988e4b-8f23-4257-ab9d-aa26fc0f5172 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:15:55 old-k8s-version-680646 crio[771]: time="2025-11-29T09:15:55.356440651Z" level=info msg="Creating container: default/busybox/busybox" id=d4ba30f6-fcd0-45eb-a15d-034156481428 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:15:55 old-k8s-version-680646 crio[771]: time="2025-11-29T09:15:55.356593278Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:15:55 old-k8s-version-680646 crio[771]: time="2025-11-29T09:15:55.361467016Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:15:55 old-k8s-version-680646 crio[771]: time="2025-11-29T09:15:55.362007903Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:15:55 old-k8s-version-680646 crio[771]: time="2025-11-29T09:15:55.386653418Z" level=info msg="Created container c61fcb40a35e8a638c6e87bb15ef0a10542773da591a570e50d9e9f424d4d009: default/busybox/busybox" id=d4ba30f6-fcd0-45eb-a15d-034156481428 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:15:55 old-k8s-version-680646 crio[771]: time="2025-11-29T09:15:55.387377806Z" level=info msg="Starting container: c61fcb40a35e8a638c6e87bb15ef0a10542773da591a570e50d9e9f424d4d009" id=ffda6be4-7cae-4f8c-8f0e-e9b2a647b596 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 09:15:55 old-k8s-version-680646 crio[771]: time="2025-11-29T09:15:55.389334491Z" level=info msg="Started container" PID=2207 containerID=c61fcb40a35e8a638c6e87bb15ef0a10542773da591a570e50d9e9f424d4d009 description=default/busybox/busybox id=ffda6be4-7cae-4f8c-8f0e-e9b2a647b596 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6ef1acb1b9006b43212e588619da1c383dbedcfb0bad34ce4d48b28862a94eab
	Nov 29 09:16:01 old-k8s-version-680646 crio[771]: time="2025-11-29T09:16:01.685000207Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	c61fcb40a35e8       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   6ef1acb1b9006       busybox                                          default
	b9cfbde268060       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      12 seconds ago      Running             coredns                   0                   d65d3aa595e4c       coredns-5dd5756b68-lwg8c                         kube-system
	b949980fdacf1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   a248cf2767cfb       storage-provisioner                              kube-system
	4bd3d62488bcd       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   d17c6ed153e6d       kindnet-xjmpm                                    kube-system
	46199c1b2422d       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      25 seconds ago      Running             kube-proxy                0                   a1222a3bb6dd5       kube-proxy-plgmf                                 kube-system
	2ab38f2f17551       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      43 seconds ago      Running             kube-apiserver            0                   848104c64893c       kube-apiserver-old-k8s-version-680646            kube-system
	8d2012db7413d       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      43 seconds ago      Running             etcd                      0                   70f913ef02813       etcd-old-k8s-version-680646                      kube-system
	00213f18ed477       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      43 seconds ago      Running             kube-scheduler            0                   5f6fe5ba99805       kube-scheduler-old-k8s-version-680646            kube-system
	e50956c1a997d       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      43 seconds ago      Running             kube-controller-manager   0                   43dd5971865ca       kube-controller-manager-old-k8s-version-680646   kube-system
	
	
	==> coredns [b9cfbde268060f8536e7424c2c92677fd9f400959b7daa640f3426147ba76ae3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:49581 - 32149 "HINFO IN 7580863547052949153.5462721497631251834. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.061502693s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-680646
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-680646
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=old-k8s-version-680646
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T09_15_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 09:15:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-680646
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 09:15:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 09:15:54 +0000   Sat, 29 Nov 2025 09:15:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 09:15:54 +0000   Sat, 29 Nov 2025 09:15:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 09:15:54 +0000   Sat, 29 Nov 2025 09:15:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 09:15:54 +0000   Sat, 29 Nov 2025 09:15:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-680646
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                3f6721fd-aca4-48a4-bf5d-00d6fd2bc52a
	  Boot ID:                    5b66e152-4b71-4129-a911-cefbf4861c86
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-lwg8c                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-old-k8s-version-680646                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         40s
	  kube-system                 kindnet-xjmpm                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-old-k8s-version-680646             250m (3%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-old-k8s-version-680646    200m (2%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-plgmf                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-old-k8s-version-680646             100m (1%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  Starting                 45s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  45s (x9 over 45s)  kubelet          Node old-k8s-version-680646 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    45s (x8 over 45s)  kubelet          Node old-k8s-version-680646 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     45s (x7 over 45s)  kubelet          Node old-k8s-version-680646 status is now: NodeHasSufficientPID
	  Normal  Starting                 39s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s                kubelet          Node old-k8s-version-680646 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s                kubelet          Node old-k8s-version-680646 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s                kubelet          Node old-k8s-version-680646 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s                node-controller  Node old-k8s-version-680646 event: Registered Node old-k8s-version-680646 in Controller
	  Normal  NodeReady                14s                kubelet          Node old-k8s-version-680646 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 e1 87 e4 84 ae 08 06
	[  +0.002640] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8a c7 6c 3e 12 d1 08 06
	[ +14.778105] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 56 8a 29 c7 2a 6f 08 06
	[ +13.520989] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 86 32 aa eb 52 77 08 06
	[  +0.000371] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 56 8a 29 c7 2a 6f 08 06
	[ +14.762906] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 df ae 93 da f6 08 06
	[  +0.000384] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a c7 6c 3e 12 d1 08 06
	[Nov29 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 2e a3 ac f9 5c c0 08 06
	[ +13.297626] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 33 ff 54 aa a1 08 06
	[  +7.390906] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0a 68 f3 3d 3c e9 08 06
	[  +0.000436] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 2e a3 ac f9 5c c0 08 06
	[  +7.145922] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ea c3 2f 2a a3 01 08 06
	[  +0.000415] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e2 33 ff 54 aa a1 08 06
	
	
	==> etcd [8d2012db7413d7f72afb6967c21226cd22dfcd0f1eb4a360d29cf68d3bc9273f] <==
	{"level":"info","ts":"2025-11-29T09:15:19.336666Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-29T09:15:19.336802Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-29T09:15:19.337498Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-29T09:15:19.337445Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-29T09:15:19.518911Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-29T09:15:19.519032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-29T09:15:19.51908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-11-29T09:15:19.5191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-11-29T09:15:19.519178Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-29T09:15:19.519212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-11-29T09:15:19.51924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-29T09:15:19.520647Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-680646 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-29T09:15:19.520782Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-29T09:15:19.520866Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-29T09:15:19.520993Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-29T09:15:19.521029Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-29T09:15:19.521079Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-29T09:15:19.522126Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-29T09:15:19.522248Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-29T09:15:19.522278Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-29T09:15:19.522487Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-29T09:15:19.522527Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-29T09:15:53.756446Z","caller":"traceutil/trace.go:171","msg":"trace[909786204] transaction","detail":"{read_only:false; response_revision:461; number_of_response:1; }","duration":"133.779349ms","start":"2025-11-29T09:15:53.622648Z","end":"2025-11-29T09:15:53.756428Z","steps":["trace[909786204] 'process raft request'  (duration: 133.636903ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-29T09:15:54.744504Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.464814ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:1 size:2195"}
	{"level":"info","ts":"2025-11-29T09:15:54.744588Z","caller":"traceutil/trace.go:171","msg":"trace[1018515413] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:1; response_revision:463; }","duration":"168.609803ms","start":"2025-11-29T09:15:54.575964Z","end":"2025-11-29T09:15:54.744574Z","steps":["trace[1018515413] 'range keys from in-memory index tree'  (duration: 168.347285ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:16:03 up 58 min,  0 user,  load average: 6.77, 4.16, 2.45
	Linux old-k8s-version-680646 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4bd3d62488bcdefa9184b2cae5aa9351838ba24d2ac98a25fa026dee48be5b17] <==
	I1129 09:15:39.438977       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 09:15:39.439414       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1129 09:15:39.439668       1 main.go:148] setting mtu 1500 for CNI 
	I1129 09:15:39.439689       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 09:15:39.439712       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T09:15:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 09:15:39.643883       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 09:15:39.737112       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 09:15:39.737200       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 09:15:39.737430       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1129 09:15:40.038302       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 09:15:40.038354       1 metrics.go:72] Registering metrics
	I1129 09:15:40.038434       1 controller.go:711] "Syncing nftables rules"
	I1129 09:15:49.647235       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 09:15:49.647301       1 main.go:301] handling current node
	I1129 09:15:59.646705       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 09:15:59.646748       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2ab38f2f175515ae04a5b04166016db2eceab1df4f93335fba66df7f8f3fccd8] <==
	I1129 09:15:21.132664       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1129 09:15:21.133168       1 controller.go:624] quota admission added evaluator for: namespaces
	I1129 09:15:21.133832       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1129 09:15:21.133879       1 aggregator.go:166] initial CRD sync complete...
	I1129 09:15:21.133887       1 autoregister_controller.go:141] Starting autoregister controller
	I1129 09:15:21.133894       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1129 09:15:21.133902       1 cache.go:39] Caches are synced for autoregister controller
	I1129 09:15:21.134040       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E1129 09:15:21.136717       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1129 09:15:21.341707       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 09:15:22.036404       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1129 09:15:22.042863       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1129 09:15:22.042885       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 09:15:22.586860       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 09:15:22.625588       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 09:15:22.743874       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1129 09:15:22.749602       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1129 09:15:22.750705       1 controller.go:624] quota admission added evaluator for: endpoints
	I1129 09:15:22.757219       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 09:15:23.112310       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1129 09:15:24.240696       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1129 09:15:24.342976       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1129 09:15:24.355300       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1129 09:15:36.572993       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1129 09:15:36.623658       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [e50956c1a997dd2c913caafadb60007803a797ad04a4e9c2cfcd6c61ea0731d1] <==
	I1129 09:15:36.181266       1 shared_informer.go:318] Caches are synced for resource quota
	I1129 09:15:36.498304       1 shared_informer.go:318] Caches are synced for garbage collector
	I1129 09:15:36.519406       1 shared_informer.go:318] Caches are synced for garbage collector
	I1129 09:15:36.519444       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1129 09:15:36.586228       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-xjmpm"
	I1129 09:15:36.588659       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-plgmf"
	I1129 09:15:36.627140       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1129 09:15:36.984179       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-r22bg"
	I1129 09:15:36.995875       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-lwg8c"
	I1129 09:15:37.009764       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="382.552861ms"
	I1129 09:15:37.022639       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.812411ms"
	I1129 09:15:37.022811       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="106.035µs"
	I1129 09:15:37.247582       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1129 09:15:37.260737       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-r22bg"
	I1129 09:15:37.267164       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.990732ms"
	I1129 09:15:37.276249       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.019258ms"
	I1129 09:15:37.276358       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="69.836µs"
	I1129 09:15:49.989210       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="124.815µs"
	I1129 09:15:50.006393       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="107.055µs"
	I1129 09:15:50.484254       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="104.989µs"
	I1129 09:15:50.946614       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68-lwg8c" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-5dd5756b68-lwg8c"
	I1129 09:15:50.946652       1 event.go:307] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I1129 09:15:50.946981       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1129 09:15:51.444147       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.723177ms"
	I1129 09:15:51.444296       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="91.216µs"
	
	
	==> kube-proxy [46199c1b2422d3589daaa945e88fe87c9312819a5be930f48662d318424b05a9] <==
	I1129 09:15:37.611580       1 server_others.go:69] "Using iptables proxy"
	I1129 09:15:37.622169       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1129 09:15:37.642800       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 09:15:37.645176       1 server_others.go:152] "Using iptables Proxier"
	I1129 09:15:37.645214       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1129 09:15:37.645222       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1129 09:15:37.645247       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1129 09:15:37.645487       1 server.go:846] "Version info" version="v1.28.0"
	I1129 09:15:37.645501       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:15:37.646369       1 config.go:97] "Starting endpoint slice config controller"
	I1129 09:15:37.646399       1 config.go:188] "Starting service config controller"
	I1129 09:15:37.646407       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1129 09:15:37.646412       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1129 09:15:37.646588       1 config.go:315] "Starting node config controller"
	I1129 09:15:37.646612       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1129 09:15:37.747005       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1129 09:15:37.747034       1 shared_informer.go:318] Caches are synced for service config
	I1129 09:15:37.747012       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [00213f18ed477ac2ea0f639b021fbb2ec38e54847e750f1bdca187cc5c1429ad] <==
	W1129 09:15:21.125454       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1129 09:15:21.126145       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1129 09:15:21.124697       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1129 09:15:21.126163       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1129 09:15:21.125603       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1129 09:15:21.126180       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1129 09:15:21.125702       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1129 09:15:21.126201       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1129 09:15:21.125916       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1129 09:15:21.126217       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1129 09:15:21.125985       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1129 09:15:21.126233       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1129 09:15:21.982497       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1129 09:15:21.982530       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1129 09:15:22.005487       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1129 09:15:22.005530       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1129 09:15:22.102974       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1129 09:15:22.103019       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1129 09:15:22.323370       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1129 09:15:22.323443       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1129 09:15:22.341961       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1129 09:15:22.341995       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1129 09:15:22.434136       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1129 09:15:22.434172       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1129 09:15:22.720238       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 29 09:15:36 old-k8s-version-680646 kubelet[1398]: I1129 09:15:36.677539    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2911dadf-509a-47fb-80b1-7bad0dac803f-xtables-lock\") pod \"kube-proxy-plgmf\" (UID: \"2911dadf-509a-47fb-80b1-7bad0dac803f\") " pod="kube-system/kube-proxy-plgmf"
	Nov 29 09:15:36 old-k8s-version-680646 kubelet[1398]: I1129 09:15:36.677570    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2911dadf-509a-47fb-80b1-7bad0dac803f-lib-modules\") pod \"kube-proxy-plgmf\" (UID: \"2911dadf-509a-47fb-80b1-7bad0dac803f\") " pod="kube-system/kube-proxy-plgmf"
	Nov 29 09:15:36 old-k8s-version-680646 kubelet[1398]: I1129 09:15:36.677599    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c8108ed-0909-4754-ab0e-0d92a16cdeef-lib-modules\") pod \"kindnet-xjmpm\" (UID: \"4c8108ed-0909-4754-ab0e-0d92a16cdeef\") " pod="kube-system/kindnet-xjmpm"
	Nov 29 09:15:36 old-k8s-version-680646 kubelet[1398]: I1129 09:15:36.677626    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c8108ed-0909-4754-ab0e-0d92a16cdeef-xtables-lock\") pod \"kindnet-xjmpm\" (UID: \"4c8108ed-0909-4754-ab0e-0d92a16cdeef\") " pod="kube-system/kindnet-xjmpm"
	Nov 29 09:15:36 old-k8s-version-680646 kubelet[1398]: I1129 09:15:36.677656    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2911dadf-509a-47fb-80b1-7bad0dac803f-kube-proxy\") pod \"kube-proxy-plgmf\" (UID: \"2911dadf-509a-47fb-80b1-7bad0dac803f\") " pod="kube-system/kube-proxy-plgmf"
	Nov 29 09:15:36 old-k8s-version-680646 kubelet[1398]: I1129 09:15:36.677692    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwhn7\" (UniqueName: \"kubernetes.io/projected/2911dadf-509a-47fb-80b1-7bad0dac803f-kube-api-access-gwhn7\") pod \"kube-proxy-plgmf\" (UID: \"2911dadf-509a-47fb-80b1-7bad0dac803f\") " pod="kube-system/kube-proxy-plgmf"
	Nov 29 09:15:36 old-k8s-version-680646 kubelet[1398]: E1129 09:15:36.786692    1398 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 29 09:15:36 old-k8s-version-680646 kubelet[1398]: E1129 09:15:36.786736    1398 projected.go:198] Error preparing data for projected volume kube-api-access-tzd2m for pod kube-system/kindnet-xjmpm: configmap "kube-root-ca.crt" not found
	Nov 29 09:15:36 old-k8s-version-680646 kubelet[1398]: E1129 09:15:36.786827    1398 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4c8108ed-0909-4754-ab0e-0d92a16cdeef-kube-api-access-tzd2m podName:4c8108ed-0909-4754-ab0e-0d92a16cdeef nodeName:}" failed. No retries permitted until 2025-11-29 09:15:37.286796992 +0000 UTC m=+13.073803830 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tzd2m" (UniqueName: "kubernetes.io/projected/4c8108ed-0909-4754-ab0e-0d92a16cdeef-kube-api-access-tzd2m") pod "kindnet-xjmpm" (UID: "4c8108ed-0909-4754-ab0e-0d92a16cdeef") : configmap "kube-root-ca.crt" not found
	Nov 29 09:15:36 old-k8s-version-680646 kubelet[1398]: E1129 09:15:36.789780    1398 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 29 09:15:36 old-k8s-version-680646 kubelet[1398]: E1129 09:15:36.790098    1398 projected.go:198] Error preparing data for projected volume kube-api-access-gwhn7 for pod kube-system/kube-proxy-plgmf: configmap "kube-root-ca.crt" not found
	Nov 29 09:15:36 old-k8s-version-680646 kubelet[1398]: E1129 09:15:36.790210    1398 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2911dadf-509a-47fb-80b1-7bad0dac803f-kube-api-access-gwhn7 podName:2911dadf-509a-47fb-80b1-7bad0dac803f nodeName:}" failed. No retries permitted until 2025-11-29 09:15:37.290182338 +0000 UTC m=+13.077189183 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gwhn7" (UniqueName: "kubernetes.io/projected/2911dadf-509a-47fb-80b1-7bad0dac803f-kube-api-access-gwhn7") pod "kube-proxy-plgmf" (UID: "2911dadf-509a-47fb-80b1-7bad0dac803f") : configmap "kube-root-ca.crt" not found
	Nov 29 09:15:39 old-k8s-version-680646 kubelet[1398]: I1129 09:15:39.381657    1398 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-plgmf" podStartSLOduration=3.381603061 podCreationTimestamp="2025-11-29 09:15:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:15:38.383560582 +0000 UTC m=+14.170567426" watchObservedRunningTime="2025-11-29 09:15:39.381603061 +0000 UTC m=+15.168609905"
	Nov 29 09:15:39 old-k8s-version-680646 kubelet[1398]: I1129 09:15:39.382031    1398 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-xjmpm" podStartSLOduration=1.749923369 podCreationTimestamp="2025-11-29 09:15:36 +0000 UTC" firstStartedPulling="2025-11-29 09:15:37.505484148 +0000 UTC m=+13.292490988" lastFinishedPulling="2025-11-29 09:15:39.13755104 +0000 UTC m=+14.924557882" observedRunningTime="2025-11-29 09:15:39.381431018 +0000 UTC m=+15.168437863" watchObservedRunningTime="2025-11-29 09:15:39.381990263 +0000 UTC m=+15.168997109"
	Nov 29 09:15:49 old-k8s-version-680646 kubelet[1398]: I1129 09:15:49.958679    1398 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 29 09:15:49 old-k8s-version-680646 kubelet[1398]: I1129 09:15:49.988409    1398 topology_manager.go:215] "Topology Admit Handler" podUID="34b2ab35-01c8-443b-90eb-b685e98a561b" podNamespace="kube-system" podName="coredns-5dd5756b68-lwg8c"
	Nov 29 09:15:49 old-k8s-version-680646 kubelet[1398]: I1129 09:15:49.989718    1398 topology_manager.go:215] "Topology Admit Handler" podUID="11cb0c11-4af9-4cf6-945c-a6dcb390a105" podNamespace="kube-system" podName="storage-provisioner"
	Nov 29 09:15:50 old-k8s-version-680646 kubelet[1398]: I1129 09:15:50.069483    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34b2ab35-01c8-443b-90eb-b685e98a561b-config-volume\") pod \"coredns-5dd5756b68-lwg8c\" (UID: \"34b2ab35-01c8-443b-90eb-b685e98a561b\") " pod="kube-system/coredns-5dd5756b68-lwg8c"
	Nov 29 09:15:50 old-k8s-version-680646 kubelet[1398]: I1129 09:15:50.069553    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgl7n\" (UniqueName: \"kubernetes.io/projected/34b2ab35-01c8-443b-90eb-b685e98a561b-kube-api-access-wgl7n\") pod \"coredns-5dd5756b68-lwg8c\" (UID: \"34b2ab35-01c8-443b-90eb-b685e98a561b\") " pod="kube-system/coredns-5dd5756b68-lwg8c"
	Nov 29 09:15:50 old-k8s-version-680646 kubelet[1398]: I1129 09:15:50.069705    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/11cb0c11-4af9-4cf6-945c-a6dcb390a105-tmp\") pod \"storage-provisioner\" (UID: \"11cb0c11-4af9-4cf6-945c-a6dcb390a105\") " pod="kube-system/storage-provisioner"
	Nov 29 09:15:50 old-k8s-version-680646 kubelet[1398]: I1129 09:15:50.069769    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tln8c\" (UniqueName: \"kubernetes.io/projected/11cb0c11-4af9-4cf6-945c-a6dcb390a105-kube-api-access-tln8c\") pod \"storage-provisioner\" (UID: \"11cb0c11-4af9-4cf6-945c-a6dcb390a105\") " pod="kube-system/storage-provisioner"
	Nov 29 09:15:50 old-k8s-version-680646 kubelet[1398]: I1129 09:15:50.483453    1398 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.483393288 podCreationTimestamp="2025-11-29 09:15:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:15:50.460833611 +0000 UTC m=+26.247840457" watchObservedRunningTime="2025-11-29 09:15:50.483393288 +0000 UTC m=+26.270400134"
	Nov 29 09:15:51 old-k8s-version-680646 kubelet[1398]: I1129 09:15:51.430926    1398 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-lwg8c" podStartSLOduration=15.430869003 podCreationTimestamp="2025-11-29 09:15:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:15:50.488289086 +0000 UTC m=+26.275295929" watchObservedRunningTime="2025-11-29 09:15:51.430869003 +0000 UTC m=+27.217875840"
	Nov 29 09:15:53 old-k8s-version-680646 kubelet[1398]: I1129 09:15:53.614943    1398 topology_manager.go:215] "Topology Admit Handler" podUID="448319e8-daf0-4564-b243-93ff2f707e47" podNamespace="default" podName="busybox"
	Nov 29 09:15:53 old-k8s-version-680646 kubelet[1398]: I1129 09:15:53.694183    1398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfnf4\" (UniqueName: \"kubernetes.io/projected/448319e8-daf0-4564-b243-93ff2f707e47-kube-api-access-zfnf4\") pod \"busybox\" (UID: \"448319e8-daf0-4564-b243-93ff2f707e47\") " pod="default/busybox"
	
	
	==> storage-provisioner [b949980fdacf15a77da12cdf8d97040f74440b8cd9ae750c8ee69963b08549b0] <==
	I1129 09:15:50.377813       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1129 09:15:50.399784       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1129 09:15:50.399932       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1129 09:15:50.431016       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 09:15:50.431293       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-680646_50d18389-b3f3-44c6-bb77-f31af59c3262!
	I1129 09:15:50.432791       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"20fa7a7e-0f59-4be3-9c60-c5917e942d20", APIVersion:"v1", ResourceVersion:"441", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-680646_50d18389-b3f3-44c6-bb77-f31af59c3262 became leader
	I1129 09:15:50.531702       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-680646_50d18389-b3f3-44c6-bb77-f31af59c3262!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-680646 -n old-k8s-version-680646
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-680646 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.39s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.42s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-897274 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-897274 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (283.905345ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:16:14Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-897274 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
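Note: the MK_ADDON_ENABLE_PAUSED failure above originates in minikube's paused-state check, which runs `sudo runc list -f json` on the node; on this crio node the command fails because runc's default state directory /run/runc does not exist. A minimal way to rerun that check by hand is sketched below (profile name taken from the output above; the crictl fallback is an assumption about the kicbase node image, not part of the test):

	# Re-run the exact command from the error above inside the node:
	minikube -p no-preload-897274 ssh -- sudo runc list -f json

	# Assumed alternative on crio nodes: list running containers through the
	# CRI instead of asking runc directly (crictl ships in the node image):
	minikube -p no-preload-897274 ssh -- sudo crictl ps --state Running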
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-897274 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-897274 describe deploy/metrics-server -n kube-system: exit status 1 (58.822893ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-897274 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
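For reference, the image assertion at start_stop_delete_test.go:219 can be checked by hand once the deployment exists; a one-line sketch (the jsonpath expression is illustrative, not the test's own code):

	kubectl --context no-preload-897274 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# Expected to contain: fake.domain/registry.k8s.io/echoserver:1.4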
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-897274
helpers_test.go:243: (dbg) docker inspect no-preload-897274:

-- stdout --
	[
	    {
	        "Id": "49538363fc817b09e95a0d5dde7389da842341d069cb9ca46414b1fb3fc4d635",
	        "Created": "2025-11-29T09:15:12.796321744Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 306145,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T09:15:12.836855432Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/49538363fc817b09e95a0d5dde7389da842341d069cb9ca46414b1fb3fc4d635/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/49538363fc817b09e95a0d5dde7389da842341d069cb9ca46414b1fb3fc4d635/hostname",
	        "HostsPath": "/var/lib/docker/containers/49538363fc817b09e95a0d5dde7389da842341d069cb9ca46414b1fb3fc4d635/hosts",
	        "LogPath": "/var/lib/docker/containers/49538363fc817b09e95a0d5dde7389da842341d069cb9ca46414b1fb3fc4d635/49538363fc817b09e95a0d5dde7389da842341d069cb9ca46414b1fb3fc4d635-json.log",
	        "Name": "/no-preload-897274",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-897274:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-897274",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "49538363fc817b09e95a0d5dde7389da842341d069cb9ca46414b1fb3fc4d635",
	                "LowerDir": "/var/lib/docker/overlay2/2391079c7361fb7ef885c6e2d9f7292f958728db50719b04d13acb986145d951-init/diff:/var/lib/docker/overlay2/5b012372cfb54f6c71f4d7f0bca0124866eeda530eaa04bd84a67c7b4c8b35a8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2391079c7361fb7ef885c6e2d9f7292f958728db50719b04d13acb986145d951/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2391079c7361fb7ef885c6e2d9f7292f958728db50719b04d13acb986145d951/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2391079c7361fb7ef885c6e2d9f7292f958728db50719b04d13acb986145d951/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-897274",
	                "Source": "/var/lib/docker/volumes/no-preload-897274/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-897274",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-897274",
	                "name.minikube.sigs.k8s.io": "no-preload-897274",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "15ba7a327824c667a3c1257a7085d9a70e11dbb219408437d8dcb89b569d4c53",
	            "SandboxKey": "/var/run/docker/netns/15ba7a327824",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-897274": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d8f02c8f2b116aa0973a6466bb52331af9f99e5ba95f8e3241688d808e61a07a",
	                    "EndpointID": "e06de93f2e6655edab95f7d8273de150cac506495c278ff1c53a47a4fbef8413",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "5a:a4:f0:b0:fa:7f",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-897274",
	                        "49538363fc81"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
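In the inspect output above, every "PortBindings" entry carries an empty "HostPort", i.e. an ephemeral-port request (the container is published with flags of the form --publish=127.0.0.1::22, as in the docker run invocations later in this log); the host ports Docker actually assigned (33094-33098) are recorded under "NetworkSettings.Ports". A minimal sketch for reading one back by hand, reusing the same Go template the harness itself runs later in this log (assuming the container is named after the profile, as above):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-897274
	# per the JSON above, this should print 33094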
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-897274 -n no-preload-897274
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-897274 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-897274 logs -n 25: (1.145230283s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p flannel-628644                                                                                                                                                        │ flannel-628644               │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo systemctl cat docker --no-pager                                                                                                                    │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo cat /etc/docker/daemon.json                                                                                                                        │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p bridge-628644 sudo docker system info                                                                                                                                 │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p bridge-628644 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p bridge-628644 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p bridge-628644 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo cri-dockerd --version                                                                                                                              │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ start   │ -p embed-certs-160987 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p bridge-628644 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p bridge-628644 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo containerd config dump                                                                                                                             │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo crio config                                                                                                                                        │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ delete  │ -p bridge-628644                                                                                                                                                         │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ delete  │ -p disable-driver-mounts-327778                                                                                                                                          │ disable-driver-mounts-327778 │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ start   │ -p default-k8s-diff-port-632243 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-680646 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                             │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	│ stop    │ -p old-k8s-version-680646 --alsologtostderr -v=3                                                                                                                         │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-897274 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:15:56
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 09:15:56.476960  322024 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:15:56.477236  322024 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:15:56.477246  322024 out.go:374] Setting ErrFile to fd 2...
	I1129 09:15:56.477250  322024 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:15:56.477471  322024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 09:15:56.478006  322024 out.go:368] Setting JSON to false
	I1129 09:15:56.479372  322024 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3508,"bootTime":1764404248,"procs":382,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 09:15:56.479446  322024 start.go:143] virtualization: kvm guest
	I1129 09:15:56.481300  322024 out.go:179] * [default-k8s-diff-port-632243] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 09:15:56.483331  322024 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:15:56.483407  322024 notify.go:221] Checking for updates...
	I1129 09:15:56.486162  322024 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:15:56.487499  322024 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 09:15:56.489982  322024 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5652/.minikube
	I1129 09:15:56.492265  322024 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 09:15:56.493435  322024 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:15:56.495096  322024 config.go:182] Loaded profile config "embed-certs-160987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:15:56.495208  322024 config.go:182] Loaded profile config "no-preload-897274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:15:56.495293  322024 config.go:182] Loaded profile config "old-k8s-version-680646": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1129 09:15:56.495409  322024 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:15:56.523766  322024 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1129 09:15:56.523865  322024 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:15:56.587194  322024 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-29 09:15:56.577155917 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:15:56.587317  322024 docker.go:319] overlay module found
	I1129 09:15:56.589013  322024 out.go:179] * Using the docker driver based on user configuration
	I1129 09:15:56.590014  322024 start.go:309] selected driver: docker
	I1129 09:15:56.590029  322024 start.go:927] validating driver "docker" against <nil>
	I1129 09:15:56.590041  322024 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:15:56.590644  322024 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:15:56.651106  322024 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-29 09:15:56.64061463 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:15:56.651283  322024 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1129 09:15:56.651529  322024 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:15:56.653420  322024 out.go:179] * Using Docker driver with root privileges
	I1129 09:15:56.654790  322024 cni.go:84] Creating CNI manager for ""
	I1129 09:15:56.654876  322024 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:15:56.654890  322024 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1129 09:15:56.654966  322024 start.go:353] cluster config:
	{Name:default-k8s-diff-port-632243 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-632243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:15:56.656309  322024 out.go:179] * Starting "default-k8s-diff-port-632243" primary control-plane node in "default-k8s-diff-port-632243" cluster
	I1129 09:15:56.657461  322024 cache.go:134] Beginning downloading kic base image for docker with crio
	I1129 09:15:56.658715  322024 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 09:15:56.659919  322024 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:15:56.659981  322024 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1129 09:15:56.659993  322024 cache.go:65] Caching tarball of preloaded images
	I1129 09:15:56.660035  322024 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 09:15:56.660088  322024 preload.go:238] Found /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1129 09:15:56.660099  322024 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1129 09:15:56.660207  322024 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/config.json ...
	I1129 09:15:56.660237  322024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/config.json: {Name:mkfda8f89e875f76dcf06e6cee2e601a1e0a1e02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:15:56.682094  322024 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 09:15:56.682117  322024 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 09:15:56.682135  322024 cache.go:243] Successfully downloaded all kic artifacts
	I1129 09:15:56.682180  322024 start.go:360] acquireMachinesLock for default-k8s-diff-port-632243: {Name:mk4d57d40865f49c5625093aed79ed0eb9003360 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:15:56.682310  322024 start.go:364] duration metric: took 105.691µs to acquireMachinesLock for "default-k8s-diff-port-632243"
	I1129 09:15:56.682341  322024 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-632243 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-632243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 09:15:56.682435  322024 start.go:125] createHost starting for "" (driver="docker")
	W1129 09:15:53.101392  305400 node_ready.go:57] node "no-preload-897274" has "Ready":"False" status (will retry)
	W1129 09:15:55.561149  305400 node_ready.go:57] node "no-preload-897274" has "Ready":"False" status (will retry)
	I1129 09:15:55.194682  318819 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-160987:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (5.554894399s)
	I1129 09:15:55.194716  318819 kic.go:203] duration metric: took 5.555053357s to extract preloaded images to volume ...
	W1129 09:15:55.194805  318819 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1129 09:15:55.195092  318819 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1129 09:15:55.195175  318819 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1129 09:15:55.272673  318819 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-160987 --name embed-certs-160987 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-160987 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-160987 --network embed-certs-160987 --ip 192.168.85.2 --volume embed-certs-160987:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1129 09:15:55.648087  318819 cli_runner.go:164] Run: docker container inspect embed-certs-160987 --format={{.State.Running}}
	I1129 09:15:55.668943  318819 cli_runner.go:164] Run: docker container inspect embed-certs-160987 --format={{.State.Status}}
	I1129 09:15:55.689310  318819 cli_runner.go:164] Run: docker exec embed-certs-160987 stat /var/lib/dpkg/alternatives/iptables
	I1129 09:15:55.745246  318819 oci.go:144] the created container "embed-certs-160987" has a running status.
	I1129 09:15:55.745285  318819 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22000-5652/.minikube/machines/embed-certs-160987/id_rsa...
	I1129 09:15:55.823128  318819 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22000-5652/.minikube/machines/embed-certs-160987/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1129 09:15:55.989201  318819 cli_runner.go:164] Run: docker container inspect embed-certs-160987 --format={{.State.Status}}
	I1129 09:15:56.014465  318819 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1129 09:15:56.014490  318819 kic_runner.go:114] Args: [docker exec --privileged embed-certs-160987 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1129 09:15:56.069162  318819 cli_runner.go:164] Run: docker container inspect embed-certs-160987 --format={{.State.Status}}
	I1129 09:15:56.098535  318819 machine.go:94] provisionDockerMachine start ...
	I1129 09:15:56.098615  318819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:15:56.121969  318819 main.go:143] libmachine: Using SSH client type: native
	I1129 09:15:56.122350  318819 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1129 09:15:56.122382  318819 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:15:56.277456  318819 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-160987
	
	I1129 09:15:56.277489  318819 ubuntu.go:182] provisioning hostname "embed-certs-160987"
	I1129 09:15:56.277551  318819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:15:56.299514  318819 main.go:143] libmachine: Using SSH client type: native
	I1129 09:15:56.299817  318819 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1129 09:15:56.299867  318819 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-160987 && echo "embed-certs-160987" | sudo tee /etc/hostname
	I1129 09:15:56.466446  318819 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-160987
	
	I1129 09:15:56.466547  318819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:15:56.489584  318819 main.go:143] libmachine: Using SSH client type: native
	I1129 09:15:56.489800  318819 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1129 09:15:56.489820  318819 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-160987' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-160987/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-160987' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:15:56.646503  318819 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 09:15:56.646544  318819 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-5652/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-5652/.minikube}
	I1129 09:15:56.646594  318819 ubuntu.go:190] setting up certificates
	I1129 09:15:56.646608  318819 provision.go:84] configureAuth start
	I1129 09:15:56.646680  318819 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-160987
	I1129 09:15:56.667150  318819 provision.go:143] copyHostCerts
	I1129 09:15:56.667209  318819 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem, removing ...
	I1129 09:15:56.667217  318819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem
	I1129 09:15:56.667285  318819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem (1078 bytes)
	I1129 09:15:56.667418  318819 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem, removing ...
	I1129 09:15:56.667430  318819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem
	I1129 09:15:56.667459  318819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem (1123 bytes)
	I1129 09:15:56.667521  318819 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem, removing ...
	I1129 09:15:56.667529  318819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem
	I1129 09:15:56.667551  318819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem (1675 bytes)
	I1129 09:15:56.667602  318819 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem org=jenkins.embed-certs-160987 san=[127.0.0.1 192.168.85.2 embed-certs-160987 localhost minikube]
	I1129 09:15:56.700618  318819 provision.go:177] copyRemoteCerts
	I1129 09:15:56.700690  318819 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:15:56.700743  318819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:15:56.720980  318819 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/embed-certs-160987/id_rsa Username:docker}
	I1129 09:15:56.829506  318819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1129 09:15:56.852079  318819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1129 09:15:56.873181  318819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1129 09:15:56.894186  318819 provision.go:87] duration metric: took 247.56289ms to configureAuth
	I1129 09:15:56.894221  318819 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:15:56.894415  318819 config.go:182] Loaded profile config "embed-certs-160987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:15:56.894526  318819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:15:56.914708  318819 main.go:143] libmachine: Using SSH client type: native
	I1129 09:15:56.915066  318819 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33099 <nil> <nil>}
	I1129 09:15:56.915096  318819 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 09:15:57.227102  318819 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 09:15:57.227135  318819 machine.go:97] duration metric: took 1.128580223s to provisionDockerMachine
	I1129 09:15:57.227146  318819 client.go:176] duration metric: took 8.234271965s to LocalClient.Create
	I1129 09:15:57.227171  318819 start.go:167] duration metric: took 8.234349965s to libmachine.API.Create "embed-certs-160987"
	I1129 09:15:57.227181  318819 start.go:293] postStartSetup for "embed-certs-160987" (driver="docker")
	I1129 09:15:57.227194  318819 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:15:57.227338  318819 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:15:57.227403  318819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:15:57.249717  318819 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/embed-certs-160987/id_rsa Username:docker}
	I1129 09:15:57.359244  318819 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:15:57.363325  318819 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:15:57.363356  318819 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:15:57.363371  318819 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5652/.minikube/addons for local assets ...
	I1129 09:15:57.363439  318819 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5652/.minikube/files for local assets ...
	I1129 09:15:57.363541  318819 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem -> 92162.pem in /etc/ssl/certs
	I1129 09:15:57.363659  318819 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:15:57.372406  318819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem --> /etc/ssl/certs/92162.pem (1708 bytes)
	I1129 09:15:57.399224  318819 start.go:296] duration metric: took 172.026651ms for postStartSetup
	I1129 09:15:57.399672  318819 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-160987
	I1129 09:15:57.423957  318819 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/config.json ...
	I1129 09:15:57.424335  318819 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:15:57.424435  318819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:15:57.446282  318819 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/embed-certs-160987/id_rsa Username:docker}
	I1129 09:15:57.552231  318819 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 09:15:57.557454  318819 start.go:128] duration metric: took 8.567604388s to createHost
	I1129 09:15:57.557483  318819 start.go:83] releasing machines lock for "embed-certs-160987", held for 8.567754124s
	I1129 09:15:57.557556  318819 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-160987
	I1129 09:15:57.579063  318819 ssh_runner.go:195] Run: cat /version.json
	I1129 09:15:57.579130  318819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:15:57.579157  318819 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:15:57.579225  318819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:15:57.601771  318819 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/embed-certs-160987/id_rsa Username:docker}
	I1129 09:15:57.602769  318819 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/embed-certs-160987/id_rsa Username:docker}
	I1129 09:15:57.762861  318819 ssh_runner.go:195] Run: systemctl --version
	I1129 09:15:57.770445  318819 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 09:15:57.810250  318819 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:15:57.815921  318819 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:15:57.815996  318819 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:15:57.848108  318819 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1129 09:15:57.848136  318819 start.go:496] detecting cgroup driver to use...
	I1129 09:15:57.848167  318819 detect.go:190] detected "systemd" cgroup driver on host os
	I1129 09:15:57.848207  318819 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 09:15:57.866385  318819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 09:15:57.881540  318819 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:15:57.881612  318819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:15:57.902095  318819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:15:57.922756  318819 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:15:58.012347  318819 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:15:58.109284  318819 docker.go:234] disabling docker service ...
	I1129 09:15:58.109372  318819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:15:58.130262  318819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:15:58.145550  318819 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:15:58.257124  318819 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:15:58.349859  318819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:15:58.363532  318819 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:15:58.379539  318819 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 09:15:58.379606  318819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:15:58.390767  318819 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1129 09:15:58.390822  318819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:15:58.400860  318819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:15:58.410683  318819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:15:58.420780  318819 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:15:58.430402  318819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:15:58.441017  318819 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:15:58.457610  318819 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:15:58.467787  318819 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:15:58.476242  318819 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:15:58.485152  318819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:15:58.577007  318819 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1129 09:16:00.955707  318819 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.378663103s)
	I1129 09:16:00.955746  318819 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 09:16:00.955801  318819 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 09:16:00.960557  318819 start.go:564] Will wait 60s for crictl version
	I1129 09:16:00.960627  318819 ssh_runner.go:195] Run: which crictl
	I1129 09:16:00.964975  318819 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:16:00.992553  318819 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1129 09:16:00.992628  318819 ssh_runner.go:195] Run: crio --version
	I1129 09:16:01.024304  318819 ssh_runner.go:195] Run: crio --version
	I1129 09:16:01.063751  318819 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1129 09:15:56.684365  322024 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1129 09:15:56.684592  322024 start.go:159] libmachine.API.Create for "default-k8s-diff-port-632243" (driver="docker")
	I1129 09:15:56.684631  322024 client.go:173] LocalClient.Create starting
	I1129 09:15:56.684710  322024 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem
	I1129 09:15:56.684748  322024 main.go:143] libmachine: Decoding PEM data...
	I1129 09:15:56.684767  322024 main.go:143] libmachine: Parsing certificate...
	I1129 09:15:56.684826  322024 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem
	I1129 09:15:56.684887  322024 main.go:143] libmachine: Decoding PEM data...
	I1129 09:15:56.684906  322024 main.go:143] libmachine: Parsing certificate...
	I1129 09:15:56.685266  322024 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-632243 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1129 09:15:56.703613  322024 cli_runner.go:211] docker network inspect default-k8s-diff-port-632243 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1129 09:15:56.703677  322024 network_create.go:284] running [docker network inspect default-k8s-diff-port-632243] to gather additional debugging logs...
	I1129 09:15:56.703696  322024 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-632243
	W1129 09:15:56.722767  322024 cli_runner.go:211] docker network inspect default-k8s-diff-port-632243 returned with exit code 1
	I1129 09:15:56.722799  322024 network_create.go:287] error running [docker network inspect default-k8s-diff-port-632243]: docker network inspect default-k8s-diff-port-632243: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-632243 not found
	I1129 09:15:56.722816  322024 network_create.go:289] output of [docker network inspect default-k8s-diff-port-632243]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-632243 not found
	
	** /stderr **
	I1129 09:15:56.722962  322024 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:15:56.742214  322024 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-94fc752bc7a7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:ed:43:e0:ad:5a} reservation:<nil>}
	I1129 09:15:56.742920  322024 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4cfc302f5d5a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:42:73:ac:ba:18:bb} reservation:<nil>}
	I1129 09:15:56.743959  322024 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-05a73bbe16b8 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:a9:af:00:78:ac} reservation:<nil>}
	I1129 09:15:56.744668  322024 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-a43c754cd409 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5e:00:3d:cd:12:c2} reservation:<nil>}
	I1129 09:15:56.745445  322024 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-8f9ed915c5ff IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:c2:69:2b:93:26:b4} reservation:<nil>}
	I1129 09:15:56.746026  322024 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-d8f02c8f2b11 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:5e:d7:2f:9e:57:74} reservation:<nil>}
	I1129 09:15:56.746905  322024 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00202ea00}
	I1129 09:15:56.746934  322024 network_create.go:124] attempt to create docker network default-k8s-diff-port-632243 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1129 09:15:56.746982  322024 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-632243 default-k8s-diff-port-632243
	I1129 09:15:56.799899  322024 network_create.go:108] docker network default-k8s-diff-port-632243 192.168.103.0/24 created
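(For readers tracing the subnet scan above: minikube walks candidate private /24 ranges and takes the first one no existing docker bridge occupies. A minimal Go sketch of that selection pattern, with the taken list and the step-by-9 third octet inferred from this log rather than from minikube's actual network package:

package main

import "fmt"

func main() {
	// Subnets already claimed by existing docker bridges, per the log above.
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
		"192.168.85.0/24": true, "192.168.94.0/24": true,
	}
	// Candidates start at 192.168.49.0/24 and step the third octet by 9.
	for octet := 49; octet < 256; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if taken[subnet] {
			fmt.Println("skipping subnet", subnet, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", subnet) // -> 192.168.103.0/24
		return
	}
}

Run against the bridges listed above, this lands on 192.168.103.0/24, matching the network the test then creates.)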
	I1129 09:15:56.799932  322024 kic.go:121] calculated static IP "192.168.103.2" for the "default-k8s-diff-port-632243" container
	I1129 09:15:56.799988  322024 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1129 09:15:56.819908  322024 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-632243 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-632243 --label created_by.minikube.sigs.k8s.io=true
	I1129 09:15:56.842058  322024 oci.go:103] Successfully created a docker volume default-k8s-diff-port-632243
	I1129 09:15:56.842161  322024 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-632243-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-632243 --entrypoint /usr/bin/test -v default-k8s-diff-port-632243:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1129 09:15:57.257763  322024 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-632243
	I1129 09:15:57.257859  322024 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:15:57.257892  322024 kic.go:194] Starting extracting preloaded images to volume ...
	I1129 09:15:57.257963  322024 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-632243:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1129 09:16:00.823605  322024 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-632243:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (3.565578273s)
	I1129 09:16:00.823643  322024 kic.go:203] duration metric: took 3.565763307s to extract preloaded images to volume ...
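(The "duration metric" lines that recur throughout this log come from timing each step end-to-end; a trivial sketch of the pattern, hypothetical rather than minikube's actual helper:

package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now()
	time.Sleep(120 * time.Millisecond) // stand-in for the tar extraction work above
	fmt.Printf("duration metric: took %s to extract preloaded images to volume\n", time.Since(start))
})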
	W1129 09:16:00.823751  322024 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1129 09:16:00.823798  322024 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1129 09:16:00.823862  322024 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1129 09:16:00.891029  322024 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-632243 --name default-k8s-diff-port-632243 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-632243 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-632243 --network default-k8s-diff-port-632243 --ip 192.168.103.2 --volume default-k8s-diff-port-632243:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1129 09:16:01.196837  322024 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-632243 --format={{.State.Running}}
	I1129 09:16:01.218159  322024 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-632243 --format={{.State.Status}}
	I1129 09:16:01.239027  322024 cli_runner.go:164] Run: docker exec default-k8s-diff-port-632243 stat /var/lib/dpkg/alternatives/iptables
	I1129 09:16:01.291218  322024 oci.go:144] the created container "default-k8s-diff-port-632243" has a running status.
	I1129 09:16:01.291242  322024 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22000-5652/.minikube/machines/default-k8s-diff-port-632243/id_rsa...
	W1129 09:15:58.060509  305400 node_ready.go:57] node "no-preload-897274" has "Ready":"False" status (will retry)
	W1129 09:16:00.060567  305400 node_ready.go:57] node "no-preload-897274" has "Ready":"False" status (will retry)
	I1129 09:16:01.065122  318819 cli_runner.go:164] Run: docker network inspect embed-certs-160987 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:16:01.085042  318819 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1129 09:16:01.089376  318819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:16:01.101087  318819 kubeadm.go:884] updating cluster {Name:embed-certs-160987 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-160987 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:16:01.101215  318819 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:16:01.101259  318819 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:16:01.139632  318819 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:16:01.139655  318819 crio.go:433] Images already preloaded, skipping extraction
	I1129 09:16:01.139699  318819 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:16:01.168018  318819 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:16:01.168041  318819 cache_images.go:86] Images are preloaded, skipping loading
	I1129 09:16:01.168050  318819 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1129 09:16:01.168146  318819 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-160987 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-160987 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 09:16:01.168225  318819 ssh_runner.go:195] Run: crio config
	I1129 09:16:01.218738  318819 cni.go:84] Creating CNI manager for ""
	I1129 09:16:01.218764  318819 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:16:01.218785  318819 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 09:16:01.218814  318819 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-160987 NodeName:embed-certs-160987 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:16:01.219021  318819 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-160987"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1129 09:16:01.219100  318819 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 09:16:01.228740  318819 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 09:16:01.228817  318819 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:16:01.238806  318819 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1129 09:16:01.254905  318819 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:16:01.275000  318819 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
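(The 2214-byte kubeadm.yaml.new written above is the four-document YAML stream rendered earlier: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A sketch that walks such a stream and prints each document's type header, assuming gopkg.in/yaml.v3 is available:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		// Only the type header of each document matters here;
		// yaml.v3 ignores the remaining fields of the struct decode.
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
})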
	I1129 09:16:01.290543  318819 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1129 09:16:01.294677  318819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:16:01.306632  318819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:16:01.405647  318819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:16:01.425023  318819 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987 for IP: 192.168.85.2
	I1129 09:16:01.425046  318819 certs.go:195] generating shared ca certs ...
	I1129 09:16:01.425065  318819 certs.go:227] acquiring lock for ca certs: {Name:mk9945b3aa42b3d50b9bfd795af3a1c63f4e35bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:01.425222  318819 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key
	I1129 09:16:01.425269  318819 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key
	I1129 09:16:01.425278  318819 certs.go:257] generating profile certs ...
	I1129 09:16:01.425361  318819 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/client.key
	I1129 09:16:01.425375  318819 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/client.crt with IP's: []
	I1129 09:16:01.542193  318819 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/client.crt ...
	I1129 09:16:01.542229  318819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/client.crt: {Name:mk297d5920737e3271240ab6b36c73fe6f7b74dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:01.542413  318819 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/client.key ...
	I1129 09:16:01.542423  318819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/client.key: {Name:mkc3aa61b0bc834c7167c52892f94d8b3b6e9823 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:01.542515  318819 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/apiserver.key.f7c4ad31
	I1129 09:16:01.542531  318819 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/apiserver.crt.f7c4ad31 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1129 09:16:01.702465  318819 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/apiserver.crt.f7c4ad31 ...
	I1129 09:16:01.702499  318819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/apiserver.crt.f7c4ad31: {Name:mkfa5873b61eef9255309c3547c496c7ac388b27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:01.702700  318819 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/apiserver.key.f7c4ad31 ...
	I1129 09:16:01.702718  318819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/apiserver.key.f7c4ad31: {Name:mk9857075243c64c74f2d7b771eeaf86b0569fa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:01.702824  318819 certs.go:382] copying /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/apiserver.crt.f7c4ad31 -> /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/apiserver.crt
	I1129 09:16:01.702956  318819 certs.go:386] copying /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/apiserver.key.f7c4ad31 -> /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/apiserver.key
	I1129 09:16:01.703022  318819 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/proxy-client.key
	I1129 09:16:01.703039  318819 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/proxy-client.crt with IP's: []
	I1129 09:16:01.804600  318819 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/proxy-client.crt ...
	I1129 09:16:01.804628  318819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/proxy-client.crt: {Name:mk8f3b512c37d96a35a171c45d3504aa808d8783 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:01.804814  318819 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/proxy-client.key ...
	I1129 09:16:01.804830  318819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/proxy-client.key: {Name:mk3221c9867de1a1f805fd188da6c308d5873cd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
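(The profile certs generated above are issued with Go's crypto/x509 and signed by minikubeCA. A self-contained sketch of issuing a cert carrying the IP SANs logged for apiserver.crt; it self-signs for brevity, where the real code signs with the CA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The IP SANs logged above: service VIP, loopback, and node IP.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
})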
	I1129 09:16:01.805109  318819 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216.pem (1338 bytes)
	W1129 09:16:01.805161  318819 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216_empty.pem, impossibly tiny 0 bytes
	I1129 09:16:01.805197  318819 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem (1679 bytes)
	I1129 09:16:01.805236  318819 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem (1078 bytes)
	I1129 09:16:01.805273  318819 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:16:01.805310  318819 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem (1675 bytes)
	I1129 09:16:01.805371  318819 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem (1708 bytes)
	I1129 09:16:01.806161  318819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:16:01.829968  318819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 09:16:01.851160  318819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:16:01.873030  318819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1129 09:16:01.895000  318819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1129 09:16:01.918191  318819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 09:16:01.938501  318819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:16:01.961075  318819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1129 09:16:01.985406  318819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216.pem --> /usr/share/ca-certificates/9216.pem (1338 bytes)
	I1129 09:16:02.007382  318819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem --> /usr/share/ca-certificates/92162.pem (1708 bytes)
	I1129 09:16:02.029592  318819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:16:02.049477  318819 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:16:02.065267  318819 ssh_runner.go:195] Run: openssl version
	I1129 09:16:02.073011  318819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9216.pem && ln -fs /usr/share/ca-certificates/9216.pem /etc/ssl/certs/9216.pem"
	I1129 09:16:02.083147  318819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9216.pem
	I1129 09:16:02.087899  318819 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:34 /usr/share/ca-certificates/9216.pem
	I1129 09:16:02.087965  318819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9216.pem
	I1129 09:16:02.133524  318819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9216.pem /etc/ssl/certs/51391683.0"
	I1129 09:16:02.143650  318819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/92162.pem && ln -fs /usr/share/ca-certificates/92162.pem /etc/ssl/certs/92162.pem"
	I1129 09:16:02.154443  318819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92162.pem
	I1129 09:16:02.158875  318819 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:34 /usr/share/ca-certificates/92162.pem
	I1129 09:16:02.158945  318819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92162.pem
	I1129 09:16:02.204993  318819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/92162.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 09:16:02.215622  318819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:16:02.225817  318819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:16:02.230515  318819 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:16:02.230571  318819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:16:02.269204  318819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 09:16:02.279954  318819 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:16:02.284446  318819 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
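(The "likely first start" conclusion above comes from treating a failing stat as absence rather than as a hard error. A local Go sketch of that probe, as a hypothetical stand-in for the ssh_runner call:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// A non-zero stat exit is read as "the cert doesn't exist yet".
	err := exec.Command("stat", "/var/lib/minikube/certs/apiserver-kubelet-client.crt").Run()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("cert already present, reusing it")
	case errors.As(err, &exitErr):
		fmt.Printf("cert doesn't exist, likely first start (exit %d)\n", exitErr.ExitCode())
	default:
		panic(err) // stat itself could not run
	}
})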
	I1129 09:16:02.284510  318819 kubeadm.go:401] StartCluster: {Name:embed-certs-160987 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-160987 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:16:02.284581  318819 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 09:16:02.284633  318819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:16:02.315409  318819 cri.go:89] found id: ""
	I1129 09:16:02.315486  318819 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:16:02.325393  318819 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1129 09:16:02.337239  318819 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1129 09:16:02.337306  318819 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1129 09:16:02.346622  318819 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1129 09:16:02.346648  318819 kubeadm.go:158] found existing configuration files:
	
	I1129 09:16:02.346711  318819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1129 09:16:02.356041  318819 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1129 09:16:02.356129  318819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1129 09:16:02.366204  318819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1129 09:16:02.379725  318819 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1129 09:16:02.379797  318819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1129 09:16:02.391513  318819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1129 09:16:02.401811  318819 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1129 09:16:02.401878  318819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1129 09:16:02.412187  318819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1129 09:16:02.421854  318819 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1129 09:16:02.421911  318819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1129 09:16:02.430989  318819 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1129 09:16:02.475338  318819 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1129 09:16:02.475443  318819 kubeadm.go:319] [preflight] Running pre-flight checks
	I1129 09:16:02.501062  318819 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1129 09:16:02.501176  318819 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1129 09:16:02.501229  318819 kubeadm.go:319] OS: Linux
	I1129 09:16:02.501288  318819 kubeadm.go:319] CGROUPS_CPU: enabled
	I1129 09:16:02.501344  318819 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1129 09:16:02.501457  318819 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1129 09:16:02.501535  318819 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1129 09:16:02.501606  318819 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1129 09:16:02.501685  318819 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1129 09:16:02.501759  318819 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1129 09:16:02.501819  318819 kubeadm.go:319] CGROUPS_IO: enabled
	I1129 09:16:02.581107  318819 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1129 09:16:02.581256  318819 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1129 09:16:02.581424  318819 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1129 09:16:02.590377  318819 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1129 09:16:02.592962  318819 out.go:252]   - Generating certificates and keys ...
	I1129 09:16:02.593090  318819 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1129 09:16:02.593198  318819 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1129 09:16:02.977352  318819 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1129 09:16:03.300835  318819 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1129 09:16:03.407121  318819 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1129 09:16:03.556388  318819 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1129 09:16:01.574124  322024 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22000-5652/.minikube/machines/default-k8s-diff-port-632243/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1129 09:16:01.613213  322024 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-632243 --format={{.State.Status}}
	I1129 09:16:01.637134  322024 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1129 09:16:01.637162  322024 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-632243 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1129 09:16:01.693612  322024 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-632243 --format={{.State.Status}}
	I1129 09:16:01.717318  322024 machine.go:94] provisionDockerMachine start ...
	I1129 09:16:01.717436  322024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-632243
	I1129 09:16:01.740721  322024 main.go:143] libmachine: Using SSH client type: native
	I1129 09:16:01.741159  322024 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33104 <nil> <nil>}
	I1129 09:16:01.741184  322024 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:16:01.896736  322024 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-632243
	
	I1129 09:16:01.896768  322024 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-632243"
	I1129 09:16:01.896834  322024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-632243
	I1129 09:16:01.917812  322024 main.go:143] libmachine: Using SSH client type: native
	I1129 09:16:01.918155  322024 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33104 <nil> <nil>}
	I1129 09:16:01.918215  322024 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-632243 && echo "default-k8s-diff-port-632243" | sudo tee /etc/hostname
	I1129 09:16:02.084178  322024 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-632243
	
	I1129 09:16:02.084254  322024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-632243
	I1129 09:16:02.109444  322024 main.go:143] libmachine: Using SSH client type: native
	I1129 09:16:02.109747  322024 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33104 <nil> <nil>}
	I1129 09:16:02.109782  322024 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-632243' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-632243/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-632243' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:16:02.260924  322024 main.go:143] libmachine: SSH cmd err, output: <nil>: 
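(Each provisioning command above (hostname, the tee to /etc/hostname, the /etc/hosts edit) runs over the container's published SSH port, 127.0.0.1:33104 here. A minimal stand-in for that native SSH client, assuming golang.org/x/crypto/ssh, with the key path and port taken from this log:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/22000-5652/.minikube/machines/default-k8s-diff-port-632243/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rig only; verify host keys in production
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33104", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("SSH cmd output: %s", out)
})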
	I1129 09:16:02.260956  322024 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-5652/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-5652/.minikube}
	I1129 09:16:02.260985  322024 ubuntu.go:190] setting up certificates
	I1129 09:16:02.260996  322024 provision.go:84] configureAuth start
	I1129 09:16:02.261052  322024 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-632243
	I1129 09:16:02.281457  322024 provision.go:143] copyHostCerts
	I1129 09:16:02.281513  322024 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem, removing ...
	I1129 09:16:02.281528  322024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem
	I1129 09:16:02.281615  322024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem (1078 bytes)
	I1129 09:16:02.281729  322024 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem, removing ...
	I1129 09:16:02.281743  322024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem
	I1129 09:16:02.281782  322024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem (1123 bytes)
	I1129 09:16:02.281932  322024 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem, removing ...
	I1129 09:16:02.281945  322024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem
	I1129 09:16:02.281979  322024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem (1675 bytes)
	I1129 09:16:02.282062  322024 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-632243 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-632243 localhost minikube]
	I1129 09:16:02.363950  322024 provision.go:177] copyRemoteCerts
	I1129 09:16:02.364020  322024 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:16:02.364070  322024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-632243
	I1129 09:16:02.389919  322024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/default-k8s-diff-port-632243/id_rsa Username:docker}
	I1129 09:16:02.500939  322024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1129 09:16:02.527493  322024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1129 09:16:02.550827  322024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1129 09:16:02.575121  322024 provision.go:87] duration metric: took 314.110311ms to configureAuth
	I1129 09:16:02.575156  322024 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:16:02.575360  322024 config.go:182] Loaded profile config "default-k8s-diff-port-632243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:16:02.575488  322024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-632243
	I1129 09:16:02.601025  322024 main.go:143] libmachine: Using SSH client type: native
	I1129 09:16:02.601340  322024 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33104 <nil> <nil>}
	I1129 09:16:02.601368  322024 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 09:16:02.917949  322024 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 09:16:02.917976  322024 machine.go:97] duration metric: took 1.200625311s to provisionDockerMachine
	I1129 09:16:02.917989  322024 client.go:176] duration metric: took 6.233350684s to LocalClient.Create
	I1129 09:16:02.918009  322024 start.go:167] duration metric: took 6.23341708s to libmachine.API.Create "default-k8s-diff-port-632243"
	I1129 09:16:02.918019  322024 start.go:293] postStartSetup for "default-k8s-diff-port-632243" (driver="docker")
	I1129 09:16:02.918032  322024 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:16:02.918096  322024 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:16:02.918155  322024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-632243
	I1129 09:16:02.940792  322024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/default-k8s-diff-port-632243/id_rsa Username:docker}
	I1129 09:16:03.051208  322024 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:16:03.056167  322024 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:16:03.056197  322024 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:16:03.056209  322024 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5652/.minikube/addons for local assets ...
	I1129 09:16:03.056272  322024 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5652/.minikube/files for local assets ...
	I1129 09:16:03.056363  322024 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem -> 92162.pem in /etc/ssl/certs
	I1129 09:16:03.056486  322024 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:16:03.066429  322024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem --> /etc/ssl/certs/92162.pem (1708 bytes)
	I1129 09:16:03.090243  322024 start.go:296] duration metric: took 172.207524ms for postStartSetup
	I1129 09:16:03.090655  322024 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-632243
	I1129 09:16:03.114128  322024 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/config.json ...
	I1129 09:16:03.114461  322024 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:16:03.114505  322024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-632243
	I1129 09:16:03.135337  322024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/default-k8s-diff-port-632243/id_rsa Username:docker}
	I1129 09:16:03.240607  322024 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 09:16:03.247006  322024 start.go:128] duration metric: took 6.564554049s to createHost
	I1129 09:16:03.247036  322024 start.go:83] releasing machines lock for "default-k8s-diff-port-632243", held for 6.56471122s
	I1129 09:16:03.247122  322024 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-632243
	I1129 09:16:03.271285  322024 ssh_runner.go:195] Run: cat /version.json
	I1129 09:16:03.271342  322024 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:16:03.271437  322024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-632243
	I1129 09:16:03.271345  322024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-632243
	I1129 09:16:03.293459  322024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/default-k8s-diff-port-632243/id_rsa Username:docker}
	I1129 09:16:03.293816  322024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/default-k8s-diff-port-632243/id_rsa Username:docker}
	I1129 09:16:03.394607  322024 ssh_runner.go:195] Run: systemctl --version
	I1129 09:16:03.453589  322024 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 09:16:03.494403  322024 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:16:03.499326  322024 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:16:03.499392  322024 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:16:03.530446  322024 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1129 09:16:03.530465  322024 start.go:496] detecting cgroup driver to use...
	I1129 09:16:03.530507  322024 detect.go:190] detected "systemd" cgroup driver on host os
	I1129 09:16:03.530550  322024 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 09:16:03.552472  322024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 09:16:03.571028  322024 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:16:03.571091  322024 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:16:03.596392  322024 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:16:03.627131  322024 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:16:03.742460  322024 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:16:03.850186  322024 docker.go:234] disabling docker service ...
	I1129 09:16:03.850257  322024 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:16:03.870022  322024 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:16:03.884185  322024 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:16:03.989136  322024 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:16:04.092727  322024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:16:04.109608  322024 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:16:04.128630  322024 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 09:16:04.128695  322024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:16:04.140642  322024 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1129 09:16:04.140694  322024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:16:04.151468  322024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:16:04.162812  322024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:16:04.174735  322024 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:16:04.185081  322024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:16:04.197134  322024 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:16:04.216958  322024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:16:04.229306  322024 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:16:04.240390  322024 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:16:04.250167  322024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:16:04.344345  322024 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1129 09:16:04.510335  322024 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 09:16:04.510399  322024 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 09:16:04.514709  322024 start.go:564] Will wait 60s for crictl version
	I1129 09:16:04.514770  322024 ssh_runner.go:195] Run: which crictl
	I1129 09:16:04.518494  322024 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:16:04.547114  322024 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1129 09:16:04.547194  322024 ssh_runner.go:195] Run: crio --version
	I1129 09:16:04.579248  322024 ssh_runner.go:195] Run: crio --version
	I1129 09:16:04.611998  322024 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	W1129 09:16:02.060815  305400 node_ready.go:57] node "no-preload-897274" has "Ready":"False" status (will retry)
	I1129 09:16:03.560657  305400 node_ready.go:49] node "no-preload-897274" is "Ready"
	I1129 09:16:03.560688  305400 node_ready.go:38] duration metric: took 12.503797846s for node "no-preload-897274" to be "Ready" ...
	I1129 09:16:03.560705  305400 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:16:03.560753  305400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:16:03.577132  305400 api_server.go:72] duration metric: took 13.214530044s to wait for apiserver process to appear ...
	I1129 09:16:03.577199  305400 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:16:03.577226  305400 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1129 09:16:03.583160  305400 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1129 09:16:03.584405  305400 api_server.go:141] control plane version: v1.34.1
	I1129 09:16:03.584437  305400 api_server.go:131] duration metric: took 7.226939ms to wait for apiserver health ...
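(The healthz wait above is a plain HTTPS GET against the apiserver. A sketch of that probe; it skips TLS verification since the apiserver presents a cluster-CA cert, where the real check trusts minikubeCA instead:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.94.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable yet:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("https://192.168.94.2:8443/healthz returned %d: %s\n", resp.StatusCode, body)
})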
	I1129 09:16:03.584450  305400 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:16:03.591091  305400 system_pods.go:59] 8 kube-system pods found
	I1129 09:16:03.591132  305400 system_pods.go:61] "coredns-66bc5c9577-85hh2" [bece0447-4cd3-40a8-9624-df30d7eca5c7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:16:03.591141  305400 system_pods.go:61] "etcd-no-preload-897274" [3cd5c513-b85a-447b-b37b-effb0e6ff78d] Running
	I1129 09:16:03.591150  305400 system_pods.go:61] "kindnet-jbmcv" [8f87e20c-ba19-4b3c-a04a-262e76f44c0d] Running
	I1129 09:16:03.591155  305400 system_pods.go:61] "kube-apiserver-no-preload-897274" [9a293987-5626-4358-aac2-071b87d4150e] Running
	I1129 09:16:03.591161  305400 system_pods.go:61] "kube-controller-manager-no-preload-897274" [08edb468-9de4-42da-ac0f-e3546e4c8119] Running
	I1129 09:16:03.591165  305400 system_pods.go:61] "kube-proxy-h9zhz" [fdfd4040-2a9c-4d7a-8fcb-d94a67351df9] Running
	I1129 09:16:03.591170  305400 system_pods.go:61] "kube-scheduler-no-preload-897274" [4a9224f6-82c9-48d5-92f2-1790269bc3dd] Running
	I1129 09:16:03.591184  305400 system_pods.go:61] "storage-provisioner" [434d3f91-300e-489c-864a-f221d18a7b07] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:16:03.591192  305400 system_pods.go:74] duration metric: took 6.735279ms to wait for pod list to return data ...
	I1129 09:16:03.591201  305400 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:16:03.594446  305400 default_sa.go:45] found service account: "default"
	I1129 09:16:03.594470  305400 default_sa.go:55] duration metric: took 3.2626ms for default service account to be created ...
	I1129 09:16:03.594478  305400 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 09:16:03.599151  305400 system_pods.go:86] 8 kube-system pods found
	I1129 09:16:03.599197  305400 system_pods.go:89] "coredns-66bc5c9577-85hh2" [bece0447-4cd3-40a8-9624-df30d7eca5c7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:16:03.599207  305400 system_pods.go:89] "etcd-no-preload-897274" [3cd5c513-b85a-447b-b37b-effb0e6ff78d] Running
	I1129 09:16:03.599214  305400 system_pods.go:89] "kindnet-jbmcv" [8f87e20c-ba19-4b3c-a04a-262e76f44c0d] Running
	I1129 09:16:03.599219  305400 system_pods.go:89] "kube-apiserver-no-preload-897274" [9a293987-5626-4358-aac2-071b87d4150e] Running
	I1129 09:16:03.599232  305400 system_pods.go:89] "kube-controller-manager-no-preload-897274" [08edb468-9de4-42da-ac0f-e3546e4c8119] Running
	I1129 09:16:03.599237  305400 system_pods.go:89] "kube-proxy-h9zhz" [fdfd4040-2a9c-4d7a-8fcb-d94a67351df9] Running
	I1129 09:16:03.599247  305400 system_pods.go:89] "kube-scheduler-no-preload-897274" [4a9224f6-82c9-48d5-92f2-1790269bc3dd] Running
	I1129 09:16:03.599256  305400 system_pods.go:89] "storage-provisioner" [434d3f91-300e-489c-864a-f221d18a7b07] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:16:03.599282  305400 retry.go:31] will retry after 227.460534ms: missing components: kube-dns
	I1129 09:16:03.832038  305400 system_pods.go:86] 8 kube-system pods found
	I1129 09:16:03.832087  305400 system_pods.go:89] "coredns-66bc5c9577-85hh2" [bece0447-4cd3-40a8-9624-df30d7eca5c7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:16:03.832097  305400 system_pods.go:89] "etcd-no-preload-897274" [3cd5c513-b85a-447b-b37b-effb0e6ff78d] Running
	I1129 09:16:03.832104  305400 system_pods.go:89] "kindnet-jbmcv" [8f87e20c-ba19-4b3c-a04a-262e76f44c0d] Running
	I1129 09:16:03.832111  305400 system_pods.go:89] "kube-apiserver-no-preload-897274" [9a293987-5626-4358-aac2-071b87d4150e] Running
	I1129 09:16:03.832129  305400 system_pods.go:89] "kube-controller-manager-no-preload-897274" [08edb468-9de4-42da-ac0f-e3546e4c8119] Running
	I1129 09:16:03.832134  305400 system_pods.go:89] "kube-proxy-h9zhz" [fdfd4040-2a9c-4d7a-8fcb-d94a67351df9] Running
	I1129 09:16:03.832144  305400 system_pods.go:89] "kube-scheduler-no-preload-897274" [4a9224f6-82c9-48d5-92f2-1790269bc3dd] Running
	I1129 09:16:03.832152  305400 system_pods.go:89] "storage-provisioner" [434d3f91-300e-489c-864a-f221d18a7b07] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:16:03.832174  305400 retry.go:31] will retry after 337.693699ms: missing components: kube-dns
	I1129 09:16:04.175040  305400 system_pods.go:86] 8 kube-system pods found
	I1129 09:16:04.175070  305400 system_pods.go:89] "coredns-66bc5c9577-85hh2" [bece0447-4cd3-40a8-9624-df30d7eca5c7] Running
	I1129 09:16:04.175078  305400 system_pods.go:89] "etcd-no-preload-897274" [3cd5c513-b85a-447b-b37b-effb0e6ff78d] Running
	I1129 09:16:04.175083  305400 system_pods.go:89] "kindnet-jbmcv" [8f87e20c-ba19-4b3c-a04a-262e76f44c0d] Running
	I1129 09:16:04.175089  305400 system_pods.go:89] "kube-apiserver-no-preload-897274" [9a293987-5626-4358-aac2-071b87d4150e] Running
	I1129 09:16:04.175095  305400 system_pods.go:89] "kube-controller-manager-no-preload-897274" [08edb468-9de4-42da-ac0f-e3546e4c8119] Running
	I1129 09:16:04.175100  305400 system_pods.go:89] "kube-proxy-h9zhz" [fdfd4040-2a9c-4d7a-8fcb-d94a67351df9] Running
	I1129 09:16:04.175107  305400 system_pods.go:89] "kube-scheduler-no-preload-897274" [4a9224f6-82c9-48d5-92f2-1790269bc3dd] Running
	I1129 09:16:04.175112  305400 system_pods.go:89] "storage-provisioner" [434d3f91-300e-489c-864a-f221d18a7b07] Running
	I1129 09:16:04.175121  305400 system_pods.go:126] duration metric: took 580.636902ms to wait for k8s-apps to be running ...
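
The "will retry after ..." lines above come from a poll-and-backoff loop: list the kube-system pods, name whichever required components are still missing, sleep a jittered delay, and try again. A minimal sketch of that pattern, with illustrative delay constants rather than minikube's actual backoff parameters:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// waitForComponents polls check() until it reports no missing components,
// sleeping a jittered, growing delay between attempts, like the
// "will retry after ..." lines above.
func waitForComponents(check func() []string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	base := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		missing := check()
		if len(missing) == 0 {
			return nil
		}
		// Jitter the delay so concurrent waiters don't poll in lockstep.
		delay := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: missing components: %v\n", delay, missing)
		time.Sleep(delay)
		base += base / 2 // grow the base delay each round
	}
	return fmt.Errorf("components still missing after %s", timeout)
}

func main() {
	attempts := 0
	err := waitForComponents(func() []string {
		attempts++
		if attempts < 3 {
			return []string{"kube-dns"} // pretend CoreDNS is still Pending
		}
		return nil
	}, 30*time.Second)
	fmt.Println("done:", err)
}
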
	I1129 09:16:04.175131  305400 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 09:16:04.175176  305400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:16:04.190038  305400 system_svc.go:56] duration metric: took 14.899084ms WaitForService to wait for kubelet
	I1129 09:16:04.190071  305400 kubeadm.go:587] duration metric: took 13.827475769s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:16:04.190114  305400 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:16:04.193280  305400 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1129 09:16:04.193311  305400 node_conditions.go:123] node cpu capacity is 8
	I1129 09:16:04.193329  305400 node_conditions.go:105] duration metric: took 3.207891ms to run NodePressure ...
	I1129 09:16:04.193366  305400 start.go:242] waiting for startup goroutines ...
	I1129 09:16:04.193379  305400 start.go:247] waiting for cluster config update ...
	I1129 09:16:04.193393  305400 start.go:256] writing updated cluster config ...
	I1129 09:16:04.193728  305400 ssh_runner.go:195] Run: rm -f paused
	I1129 09:16:04.198454  305400 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:16:04.202430  305400 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-85hh2" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:16:04.208098  305400 pod_ready.go:94] pod "coredns-66bc5c9577-85hh2" is "Ready"
	I1129 09:16:04.208126  305400 pod_ready.go:86] duration metric: took 5.663067ms for pod "coredns-66bc5c9577-85hh2" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:16:04.210817  305400 pod_ready.go:83] waiting for pod "etcd-no-preload-897274" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:16:04.216438  305400 pod_ready.go:94] pod "etcd-no-preload-897274" is "Ready"
	I1129 09:16:04.216471  305400 pod_ready.go:86] duration metric: took 5.629706ms for pod "etcd-no-preload-897274" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:16:04.219259  305400 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-897274" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:16:04.224495  305400 pod_ready.go:94] pod "kube-apiserver-no-preload-897274" is "Ready"
	I1129 09:16:04.224524  305400 pod_ready.go:86] duration metric: took 5.241453ms for pod "kube-apiserver-no-preload-897274" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:16:04.227248  305400 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-897274" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:16:04.603542  305400 pod_ready.go:94] pod "kube-controller-manager-no-preload-897274" is "Ready"
	I1129 09:16:04.603569  305400 pod_ready.go:86] duration metric: took 376.297664ms for pod "kube-controller-manager-no-preload-897274" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:16:04.802834  305400 pod_ready.go:83] waiting for pod "kube-proxy-h9zhz" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:16:05.202150  305400 pod_ready.go:94] pod "kube-proxy-h9zhz" is "Ready"
	I1129 09:16:05.202183  305400 pod_ready.go:86] duration metric: took 399.303986ms for pod "kube-proxy-h9zhz" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:16:05.403919  305400 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-897274" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:16:05.803184  305400 pod_ready.go:94] pod "kube-scheduler-no-preload-897274" is "Ready"
	I1129 09:16:05.803224  305400 pod_ready.go:86] duration metric: took 399.281349ms for pod "kube-scheduler-no-preload-897274" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:16:05.803242  305400 pod_ready.go:40] duration metric: took 1.604752061s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
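
The pod_ready wait above keys off each pod's Ready condition, not its phase. A dependency-free sketch of that predicate, using a hand-rolled Condition struct in place of client-go's corev1.PodCondition:

package main

import "fmt"

// Condition mirrors the shape of a Kubernetes PodCondition for this sketch.
type Condition struct {
	Type   string
	Status string
}

// isReady reports whether the pod's Ready condition is True, which is what
// the pod_ready.go:94 lines above are checking.
func isReady(conds []Condition) bool {
	for _, c := range conds {
		if c.Type == "Ready" {
			return c.Status == "True"
		}
	}
	return false
}

func main() {
	conds := []Condition{{Type: "PodScheduled", Status: "True"}, {Type: "Ready", Status: "True"}}
	fmt.Println(isReady(conds)) // true
}
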
	I1129 09:16:05.856773  305400 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1129 09:16:05.859221  305400 out.go:179] * Done! kubectl is now configured to use "no-preload-897274" cluster and "default" namespace by default
	I1129 09:16:04.613392  322024 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-632243 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:16:04.632654  322024 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1129 09:16:04.637066  322024 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
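
The one-liner above makes the /etc/hosts update idempotent: strip any existing line ending in a tab plus the hostname, append the fresh mapping, and copy the temp file back over /etc/hosts. A rough Go equivalent, written against a scratch file so the sketch stays safe to run:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost rewrites a hosts file so exactly one line maps hostname to ip,
// mirroring the grep -v / echo / cp pipeline in the log above.
func upsertHost(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any prior entry for this hostname (the grep -v $'\t<name>$' part).
		if strings.HasSuffix(line, "\t"+hostname) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Demo against a scratch copy rather than the real /etc/hosts.
	_ = os.WriteFile("/tmp/hosts-demo", []byte("127.0.0.1\tlocalhost\n"), 0644)
	if err := upsertHost("/tmp/hosts-demo", "192.168.103.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
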
	I1129 09:16:04.648305  322024 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-632243 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-632243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:16:04.648415  322024 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:16:04.648468  322024 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:16:04.681703  322024 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:16:04.681722  322024 crio.go:433] Images already preloaded, skipping extraction
	I1129 09:16:04.681765  322024 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:16:04.710093  322024 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:16:04.710113  322024 cache_images.go:86] Images are preloaded, skipping loading
	I1129 09:16:04.710121  322024 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.1 crio true true} ...
	I1129 09:16:04.710219  322024 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-632243 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-632243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 09:16:04.710278  322024 ssh_runner.go:195] Run: crio config
	I1129 09:16:04.759835  322024 cni.go:84] Creating CNI manager for ""
	I1129 09:16:04.759898  322024 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:16:04.759919  322024 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 09:16:04.759946  322024 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-632243 NodeName:default-k8s-diff-port-632243 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:16:04.760082  322024 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-632243"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1129 09:16:04.760156  322024 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 09:16:04.769007  322024 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 09:16:04.769079  322024 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:16:04.778154  322024 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1129 09:16:04.792393  322024 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:16:04.809791  322024 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1129 09:16:04.824767  322024 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1129 09:16:04.828927  322024 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:16:04.839835  322024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:16:04.922357  322024 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:16:04.952816  322024 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243 for IP: 192.168.103.2
	I1129 09:16:04.952860  322024 certs.go:195] generating shared ca certs ...
	I1129 09:16:04.952883  322024 certs.go:227] acquiring lock for ca certs: {Name:mk9945b3aa42b3d50b9bfd795af3a1c63f4e35bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:04.953059  322024 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key
	I1129 09:16:04.953111  322024 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key
	I1129 09:16:04.953125  322024 certs.go:257] generating profile certs ...
	I1129 09:16:04.953191  322024 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/client.key
	I1129 09:16:04.953208  322024 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/client.crt with IP's: []
	I1129 09:16:05.004108  322024 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/client.crt ...
	I1129 09:16:05.004137  322024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/client.crt: {Name:mka041ed80308253195f121f21ca6ab4746fe4f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:05.004322  322024 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/client.key ...
	I1129 09:16:05.004341  322024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/client.key: {Name:mkeb62654ba6c4c83f85bf392959628a788b9823 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:05.004469  322024 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/apiserver.key.6a7d6562
	I1129 09:16:05.004486  322024 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/apiserver.crt.6a7d6562 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1129 09:16:05.097237  322024 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/apiserver.crt.6a7d6562 ...
	I1129 09:16:05.097265  322024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/apiserver.crt.6a7d6562: {Name:mk119c10fc917f731c0eb70860a6327c353a773a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:05.097476  322024 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/apiserver.key.6a7d6562 ...
	I1129 09:16:05.097494  322024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/apiserver.key.6a7d6562: {Name:mk9dc2f3f0b7c9f52aa0bd192bf5ff49a4f52b64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:05.097600  322024 certs.go:382] copying /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/apiserver.crt.6a7d6562 -> /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/apiserver.crt
	I1129 09:16:05.097687  322024 certs.go:386] copying /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/apiserver.key.6a7d6562 -> /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/apiserver.key
	I1129 09:16:05.097744  322024 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/proxy-client.key
	I1129 09:16:05.097759  322024 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/proxy-client.crt with IP's: []
	I1129 09:16:05.205376  322024 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/proxy-client.crt ...
	I1129 09:16:05.205406  322024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/proxy-client.crt: {Name:mk40295d623b8d811d97f15971ae75906bdebb6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:05.205590  322024 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/proxy-client.key ...
	I1129 09:16:05.205602  322024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/proxy-client.key: {Name:mk690b65ab4dd55c37c899df35e4c32805838179 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
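
Each "generating signed profile cert" step above issues a key pair and a CA-signed certificate whose IP SANs cover every address the apiserver serves on (the service VIP 10.96.0.1, loopback, and the node IP). A condensed sketch of that flow with Go's crypto/x509, using a throwaway CA for illustration where minikube reuses the existing minikubeCA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway self-signed CA (minikube loads the existing minikubeCA instead).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Apiserver serving cert with the IP SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.103.2"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	fmt.Println("issued apiserver cert with 4 IP SANs")
}
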
	I1129 09:16:05.205767  322024 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216.pem (1338 bytes)
	W1129 09:16:05.205805  322024 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216_empty.pem, impossibly tiny 0 bytes
	I1129 09:16:05.205815  322024 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem (1679 bytes)
	I1129 09:16:05.205851  322024 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem (1078 bytes)
	I1129 09:16:05.205876  322024 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:16:05.205904  322024 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem (1675 bytes)
	I1129 09:16:05.205945  322024 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem (1708 bytes)
	I1129 09:16:05.206615  322024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:16:05.226091  322024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 09:16:05.245635  322024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:16:05.264912  322024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1129 09:16:05.284585  322024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1129 09:16:05.304929  322024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 09:16:05.326530  322024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:16:05.348337  322024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1129 09:16:05.367443  322024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:16:05.388137  322024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216.pem --> /usr/share/ca-certificates/9216.pem (1338 bytes)
	I1129 09:16:05.407900  322024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem --> /usr/share/ca-certificates/92162.pem (1708 bytes)
	I1129 09:16:05.428402  322024 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:16:05.444574  322024 ssh_runner.go:195] Run: openssl version
	I1129 09:16:05.451463  322024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:16:05.462040  322024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:16:05.466967  322024 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:16:05.467034  322024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:16:05.506731  322024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 09:16:05.517748  322024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9216.pem && ln -fs /usr/share/ca-certificates/9216.pem /etc/ssl/certs/9216.pem"
	I1129 09:16:05.527938  322024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9216.pem
	I1129 09:16:05.532987  322024 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:34 /usr/share/ca-certificates/9216.pem
	I1129 09:16:05.533070  322024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9216.pem
	I1129 09:16:05.573481  322024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9216.pem /etc/ssl/certs/51391683.0"
	I1129 09:16:05.583732  322024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/92162.pem && ln -fs /usr/share/ca-certificates/92162.pem /etc/ssl/certs/92162.pem"
	I1129 09:16:05.594450  322024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92162.pem
	I1129 09:16:05.599504  322024 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:34 /usr/share/ca-certificates/92162.pem
	I1129 09:16:05.599576  322024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92162.pem
	I1129 09:16:05.639911  322024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/92162.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 09:16:05.650594  322024 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:16:05.654958  322024 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1129 09:16:05.655017  322024 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-632243 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-632243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:16:05.655097  322024 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 09:16:05.655159  322024 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:16:05.688271  322024 cri.go:89] found id: ""
	I1129 09:16:05.688345  322024 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:16:05.698535  322024 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1129 09:16:05.707893  322024 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1129 09:16:05.707955  322024 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1129 09:16:05.717378  322024 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1129 09:16:05.717399  322024 kubeadm.go:158] found existing configuration files:
	
	I1129 09:16:05.717454  322024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1129 09:16:05.726790  322024 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1129 09:16:05.726876  322024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1129 09:16:05.735877  322024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1129 09:16:05.746333  322024 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1129 09:16:05.746407  322024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1129 09:16:05.755352  322024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1129 09:16:05.765152  322024 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1129 09:16:05.765213  322024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1129 09:16:05.774489  322024 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1129 09:16:05.783224  322024 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1129 09:16:05.783288  322024 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
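
The grep/rm sequence above is stale-config cleanup: any kubeconfig under /etc/kubernetes that does not mention the expected control-plane endpoint (here, because the files do not exist at all) is removed so kubeadm regenerates it. A compact sketch of the same check, with the endpoint and file list taken from the log:

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleKubeconfigs removes each config that does not reference the
// expected endpoint, mirroring the grep/rm sequence in the log above.
func cleanStaleKubeconfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			os.Remove(f) // ignore errors, as rm -f does
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8444", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
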
	I1129 09:16:05.792089  322024 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1129 09:16:05.863598  322024 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1129 09:16:05.939908  322024 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1129 09:16:03.855745  318819 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1129 09:16:03.855954  318819 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-160987 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1129 09:16:04.030198  318819 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1129 09:16:04.030438  318819 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-160987 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1129 09:16:04.264992  318819 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1129 09:16:04.978366  318819 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1129 09:16:05.878106  318819 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1129 09:16:05.878473  318819 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1129 09:16:06.252521  318819 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1129 09:16:06.577448  318819 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1129 09:16:06.941684  318819 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1129 09:16:07.328891  318819 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1129 09:16:07.532380  318819 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1129 09:16:07.533107  318819 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1129 09:16:07.539536  318819 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1129 09:16:07.541525  318819 out.go:252]   - Booting up control plane ...
	I1129 09:16:07.541663  318819 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1129 09:16:07.541779  318819 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1129 09:16:07.542864  318819 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1129 09:16:07.564864  318819 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1129 09:16:07.565005  318819 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1129 09:16:07.574594  318819 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1129 09:16:07.574883  318819 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1129 09:16:07.575420  318819 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1129 09:16:07.689698  318819 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1129 09:16:07.689897  318819 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1129 09:16:08.691403  318819 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001850522s
	I1129 09:16:08.694640  318819 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1129 09:16:08.694769  318819 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1129 09:16:08.694951  318819 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1129 09:16:08.695072  318819 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1129 09:16:09.965274  318819 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.270546718s
	I1129 09:16:11.262165  318819 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.567396956s
	I1129 09:16:13.196065  318819 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501337609s
	I1129 09:16:13.208528  318819 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1129 09:16:13.220754  318819 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1129 09:16:13.233753  318819 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1129 09:16:13.234096  318819 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-160987 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1129 09:16:13.245302  318819 kubeadm.go:319] [bootstrap-token] Using token: 8wvvpy.rnc8he8uar2rfxfo
	I1129 09:16:13.246899  318819 out.go:252]   - Configuring RBAC rules ...
	I1129 09:16:13.247072  318819 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1129 09:16:13.250759  318819 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1129 09:16:13.257616  318819 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1129 09:16:13.260759  318819 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1129 09:16:13.263783  318819 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1129 09:16:13.266809  318819 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1129 09:16:13.603679  318819 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1129 09:16:14.029206  318819 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1129 09:16:14.603010  318819 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1129 09:16:14.604286  318819 kubeadm.go:319] 
	I1129 09:16:14.604389  318819 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1129 09:16:14.604409  318819 kubeadm.go:319] 
	I1129 09:16:14.604498  318819 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1129 09:16:14.604506  318819 kubeadm.go:319] 
	I1129 09:16:14.604536  318819 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1129 09:16:14.604607  318819 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1129 09:16:14.604670  318819 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1129 09:16:14.604678  318819 kubeadm.go:319] 
	I1129 09:16:14.604741  318819 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1129 09:16:14.604749  318819 kubeadm.go:319] 
	I1129 09:16:14.604805  318819 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1129 09:16:14.604814  318819 kubeadm.go:319] 
	I1129 09:16:14.604906  318819 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1129 09:16:14.604999  318819 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1129 09:16:14.605081  318819 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1129 09:16:14.605089  318819 kubeadm.go:319] 
	I1129 09:16:14.605187  318819 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1129 09:16:14.605279  318819 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1129 09:16:14.605286  318819 kubeadm.go:319] 
	I1129 09:16:14.605387  318819 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 8wvvpy.rnc8he8uar2rfxfo \
	I1129 09:16:14.605507  318819 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c67f487f8861b4229187cb5510054720af943291cf02c59a2b23e487361f58e2 \
	I1129 09:16:14.605531  318819 kubeadm.go:319] 	--control-plane 
	I1129 09:16:14.605536  318819 kubeadm.go:319] 
	I1129 09:16:14.605634  318819 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1129 09:16:14.605645  318819 kubeadm.go:319] 
	I1129 09:16:14.605739  318819 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 8wvvpy.rnc8he8uar2rfxfo \
	I1129 09:16:14.605868  318819 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c67f487f8861b4229187cb5510054720af943291cf02c59a2b23e487361f58e2 
	I1129 09:16:14.610227  318819 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1129 09:16:14.610395  318819 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
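
The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info; joining nodes use it to pin the cluster CA during token-based discovery. A sketch that recomputes the value from a PEM-encoded ca.crt (path taken from the log, adjust as needed):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash recomputes kubeadm's --discovery-token-ca-cert-hash for a PEM CA
// cert: the SHA-256 of the DER-encoded Subject Public Key Info.
func caCertHash(pemPath string) (string, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return fmt.Sprintf("sha256:%x", sum), nil
}

func main() {
	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(h)
}
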
	I1129 09:16:14.610424  318819 cni.go:84] Creating CNI manager for ""
	I1129 09:16:14.610433  318819 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:16:14.612765  318819 out.go:179] * Configuring CNI (Container Networking Interface) ...
	
	
	==> CRI-O <==
	Nov 29 09:16:03 no-preload-897274 crio[767]: time="2025-11-29T09:16:03.614147713Z" level=info msg="Starting container: 4759ed4628d1d284f9f543ec32c9500839b74a9b334e04ff1632010cbe9fdd6f" id=83e8f0cc-70fb-4b00-b068-c7f84ef5d032 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 09:16:03 no-preload-897274 crio[767]: time="2025-11-29T09:16:03.617254035Z" level=info msg="Started container" PID=2905 containerID=4759ed4628d1d284f9f543ec32c9500839b74a9b334e04ff1632010cbe9fdd6f description=kube-system/coredns-66bc5c9577-85hh2/coredns id=83e8f0cc-70fb-4b00-b068-c7f84ef5d032 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b1206dbcf1b2e6f62874b4887f8d8503e4b52783b6594501fc404ebcb38a6cc9
	Nov 29 09:16:06 no-preload-897274 crio[767]: time="2025-11-29T09:16:06.331715394Z" level=info msg="Running pod sandbox: default/busybox/POD" id=7d3bbfa7-6208-4df7-8140-d8268f9d2cde name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 09:16:06 no-preload-897274 crio[767]: time="2025-11-29T09:16:06.331798206Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:16:06 no-preload-897274 crio[767]: time="2025-11-29T09:16:06.33716072Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:2a888a024d442ff4ee387254490d99f70046f4c7ec7ac07d29d8aa4121af541a UID:3251ffcc-aef4-4718-b927-af59fc9befca NetNS:/var/run/netns/7c689d35-bea5-45a6-988e-83e632e00369 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000cc3220}] Aliases:map[]}"
	Nov 29 09:16:06 no-preload-897274 crio[767]: time="2025-11-29T09:16:06.337202714Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 29 09:16:06 no-preload-897274 crio[767]: time="2025-11-29T09:16:06.348077937Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:2a888a024d442ff4ee387254490d99f70046f4c7ec7ac07d29d8aa4121af541a UID:3251ffcc-aef4-4718-b927-af59fc9befca NetNS:/var/run/netns/7c689d35-bea5-45a6-988e-83e632e00369 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000cc3220}] Aliases:map[]}"
	Nov 29 09:16:06 no-preload-897274 crio[767]: time="2025-11-29T09:16:06.348271292Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 29 09:16:06 no-preload-897274 crio[767]: time="2025-11-29T09:16:06.349112725Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 29 09:16:06 no-preload-897274 crio[767]: time="2025-11-29T09:16:06.350058036Z" level=info msg="Ran pod sandbox 2a888a024d442ff4ee387254490d99f70046f4c7ec7ac07d29d8aa4121af541a with infra container: default/busybox/POD" id=7d3bbfa7-6208-4df7-8140-d8268f9d2cde name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 09:16:06 no-preload-897274 crio[767]: time="2025-11-29T09:16:06.351484866Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=25ec3799-a492-4e66-ac2b-c375d8f44fea name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:16:06 no-preload-897274 crio[767]: time="2025-11-29T09:16:06.351636273Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=25ec3799-a492-4e66-ac2b-c375d8f44fea name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:16:06 no-preload-897274 crio[767]: time="2025-11-29T09:16:06.351680541Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=25ec3799-a492-4e66-ac2b-c375d8f44fea name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:16:06 no-preload-897274 crio[767]: time="2025-11-29T09:16:06.352417321Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8c7d8956-86d4-40a3-8045-d50b07e773b8 name=/runtime.v1.ImageService/PullImage
	Nov 29 09:16:06 no-preload-897274 crio[767]: time="2025-11-29T09:16:06.353997792Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 29 09:16:07 no-preload-897274 crio[767]: time="2025-11-29T09:16:07.59915903Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=8c7d8956-86d4-40a3-8045-d50b07e773b8 name=/runtime.v1.ImageService/PullImage
	Nov 29 09:16:07 no-preload-897274 crio[767]: time="2025-11-29T09:16:07.599821006Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d9d891a6-7909-4e77-ab77-81b8b51a5068 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:16:07 no-preload-897274 crio[767]: time="2025-11-29T09:16:07.60172925Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=da36b54e-51cb-4c61-8710-06cbc6e0e80a name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:16:07 no-preload-897274 crio[767]: time="2025-11-29T09:16:07.60705064Z" level=info msg="Creating container: default/busybox/busybox" id=2c25b21e-c2d3-4819-b5ca-9d433298efe7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:16:07 no-preload-897274 crio[767]: time="2025-11-29T09:16:07.6072403Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:16:07 no-preload-897274 crio[767]: time="2025-11-29T09:16:07.614264551Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:16:07 no-preload-897274 crio[767]: time="2025-11-29T09:16:07.615003937Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:16:07 no-preload-897274 crio[767]: time="2025-11-29T09:16:07.642827447Z" level=info msg="Created container aa2cf7bf4adb54b6df3866497d2e499293a070460ef59f66221f436f04a3f7b6: default/busybox/busybox" id=2c25b21e-c2d3-4819-b5ca-9d433298efe7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:16:07 no-preload-897274 crio[767]: time="2025-11-29T09:16:07.643561093Z" level=info msg="Starting container: aa2cf7bf4adb54b6df3866497d2e499293a070460ef59f66221f436f04a3f7b6" id=f3dab8e0-7383-4217-bcb1-24edb4f10183 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 09:16:07 no-preload-897274 crio[767]: time="2025-11-29T09:16:07.646026864Z" level=info msg="Started container" PID=2974 containerID=aa2cf7bf4adb54b6df3866497d2e499293a070460ef59f66221f436f04a3f7b6 description=default/busybox/busybox id=f3dab8e0-7383-4217-bcb1-24edb4f10183 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2a888a024d442ff4ee387254490d99f70046f4c7ec7ac07d29d8aa4121af541a
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	aa2cf7bf4adb5       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   2a888a024d442       busybox                                     default
	4759ed4628d1d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   b1206dbcf1b2e       coredns-66bc5c9577-85hh2                    kube-system
	cc9d303d502d4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   860ab358371cc       storage-provisioner                         kube-system
	d2cbc1d66f326       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   e875a7349067d       kindnet-jbmcv                               kube-system
	abc233d34c61b       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      23 seconds ago      Running             kube-proxy                0                   21ddeb849e89a       kube-proxy-h9zhz                            kube-system
	5be8ce7b16cb9       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      36 seconds ago      Running             kube-controller-manager   0                   b817c744e93b5       kube-controller-manager-no-preload-897274   kube-system
	58e496065fd97       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      36 seconds ago      Running             kube-apiserver            0                   1a7371956483f       kube-apiserver-no-preload-897274            kube-system
	232a6c70a51bc       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      36 seconds ago      Running             kube-scheduler            0                   e356ac65691f3       kube-scheduler-no-preload-897274            kube-system
	e111ff13d3b1a       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      36 seconds ago      Running             etcd                      0                   7a173494b9d03       etcd-no-preload-897274                      kube-system
	
	
	==> coredns [4759ed4628d1d284f9f543ec32c9500839b74a9b334e04ff1632010cbe9fdd6f] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39018 - 43472 "HINFO IN 478383856641690766.5612445598597958130. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.01042872s
	
	
	==> describe nodes <==
	Name:               no-preload-897274
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-897274
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=no-preload-897274
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T09_15_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 09:15:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-897274
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 09:16:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 09:16:15 +0000   Sat, 29 Nov 2025 09:15:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 09:16:15 +0000   Sat, 29 Nov 2025 09:15:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 09:16:15 +0000   Sat, 29 Nov 2025 09:15:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 09:16:15 +0000   Sat, 29 Nov 2025 09:16:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-897274
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                fc2d6958-d45c-48d6-8525-65c7170610ae
	  Boot ID:                    5b66e152-4b71-4129-a911-cefbf4861c86
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-85hh2                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-no-preload-897274                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-jbmcv                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-no-preload-897274             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-no-preload-897274    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-h9zhz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-no-preload-897274             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 31s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node no-preload-897274 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node no-preload-897274 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node no-preload-897274 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node no-preload-897274 event: Registered Node no-preload-897274 in Controller
	  Normal  NodeReady                12s   kubelet          Node no-preload-897274 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 e1 87 e4 84 ae 08 06
	[  +0.002640] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8a c7 6c 3e 12 d1 08 06
	[ +14.778105] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 56 8a 29 c7 2a 6f 08 06
	[ +13.520989] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 86 32 aa eb 52 77 08 06
	[  +0.000371] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 56 8a 29 c7 2a 6f 08 06
	[ +14.762906] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 df ae 93 da f6 08 06
	[  +0.000384] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a c7 6c 3e 12 d1 08 06
	[Nov29 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 2e a3 ac f9 5c c0 08 06
	[ +13.297626] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 33 ff 54 aa a1 08 06
	[  +7.390906] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0a 68 f3 3d 3c e9 08 06
	[  +0.000436] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 2e a3 ac f9 5c c0 08 06
	[  +7.145922] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ea c3 2f 2a a3 01 08 06
	[  +0.000415] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e2 33 ff 54 aa a1 08 06
	
	
	==> etcd [e111ff13d3b1af0b624d253ae20e8b63c006f74826d59adfd405cf3d7f3657c6] <==
	{"level":"warn","ts":"2025-11-29T09:15:41.728070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:15:41.736275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:15:41.745882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:15:41.761379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:15:41.766312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:15:41.774968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:15:41.786475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:15:41.848129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:15:45.932967Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.970385ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-29T09:15:45.933098Z","caller":"traceutil/trace.go:172","msg":"trace[1817130160] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:263; }","duration":"120.1151ms","start":"2025-11-29T09:15:45.812958Z","end":"2025-11-29T09:15:45.933074Z","steps":["trace[1817130160] 'range keys from in-memory index tree'  (duration: 119.94585ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:15:46.090254Z","caller":"traceutil/trace.go:172","msg":"trace[851067848] transaction","detail":"{read_only:false; number_of_response:0; response_revision:265; }","duration":"101.684558ms","start":"2025-11-29T09:15:45.988549Z","end":"2025-11-29T09:15:46.090234Z","steps":["trace[851067848] 'process raft request'  (duration: 101.437624ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:15:46.090326Z","caller":"traceutil/trace.go:172","msg":"trace[1875571216] transaction","detail":"{read_only:false; number_of_response:0; response_revision:265; }","duration":"101.758895ms","start":"2025-11-29T09:15:45.988549Z","end":"2025-11-29T09:15:46.090308Z","steps":["trace[1875571216] 'process raft request'  (duration: 101.508125ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:15:46.090354Z","caller":"traceutil/trace.go:172","msg":"trace[860126999] transaction","detail":"{read_only:false; number_of_response:0; response_revision:265; }","duration":"101.788771ms","start":"2025-11-29T09:15:45.988549Z","end":"2025-11-29T09:15:46.090338Z","steps":["trace[860126999] 'process raft request'  (duration: 101.538388ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:15:46.090398Z","caller":"traceutil/trace.go:172","msg":"trace[2135084293] transaction","detail":"{read_only:false; number_of_response:0; response_revision:265; }","duration":"101.826435ms","start":"2025-11-29T09:15:45.988550Z","end":"2025-11-29T09:15:46.090376Z","steps":["trace[2135084293] 'process raft request'  (duration: 101.480933ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:15:46.090460Z","caller":"traceutil/trace.go:172","msg":"trace[1247141841] transaction","detail":"{read_only:false; response_revision:265; number_of_response:1; }","duration":"136.876076ms","start":"2025-11-29T09:15:45.953570Z","end":"2025-11-29T09:15:46.090446Z","steps":["trace[1247141841] 'process raft request'  (duration: 85.80598ms)","trace[1247141841] 'compare'  (duration: 50.440031ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-29T09:15:53.224929Z","caller":"traceutil/trace.go:172","msg":"trace[15286410] transaction","detail":"{read_only:false; response_revision:386; number_of_response:1; }","duration":"110.124241ms","start":"2025-11-29T09:15:53.114787Z","end":"2025-11-29T09:15:53.224912Z","steps":["trace[15286410] 'process raft request'  (duration: 82.424852ms)","trace[15286410] 'compare'  (duration: 27.542855ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-29T09:15:53.394384Z","caller":"traceutil/trace.go:172","msg":"trace[212280164] transaction","detail":"{read_only:false; response_revision:387; number_of_response:1; }","duration":"159.236642ms","start":"2025-11-29T09:15:53.235128Z","end":"2025-11-29T09:15:53.394365Z","steps":["trace[212280164] 'process raft request'  (duration: 152.920279ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-29T09:15:54.127087Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.82285ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-no-preload-897274\" limit:1 ","response":"range_response_count:1 size:7900"}
	{"level":"info","ts":"2025-11-29T09:15:54.127173Z","caller":"traceutil/trace.go:172","msg":"trace[734329126] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-no-preload-897274; range_end:; response_count:1; response_revision:389; }","duration":"110.92506ms","start":"2025-11-29T09:15:54.016231Z","end":"2025-11-29T09:15:54.127156Z","steps":["trace[734329126] 'range keys from in-memory index tree'  (duration: 110.715744ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:15:54.278968Z","caller":"traceutil/trace.go:172","msg":"trace[795451868] transaction","detail":"{read_only:false; response_revision:390; number_of_response:1; }","duration":"145.687256ms","start":"2025-11-29T09:15:54.133265Z","end":"2025-11-29T09:15:54.278952Z","steps":["trace[795451868] 'process raft request'  (duration: 145.505214ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:15:54.402813Z","caller":"traceutil/trace.go:172","msg":"trace[377890220] linearizableReadLoop","detail":"{readStateIndex:403; appliedIndex:403; }","duration":"121.00487ms","start":"2025-11-29T09:15:54.281787Z","end":"2025-11-29T09:15:54.402792Z","steps":["trace[377890220] 'read index received'  (duration: 120.997846ms)","trace[377890220] 'applied index is now lower than readState.Index'  (duration: 5.872µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-29T09:15:54.474905Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"193.095526ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-no-preload-897274\" limit:1 ","response":"range_response_count:1 size:7513"}
	{"level":"info","ts":"2025-11-29T09:15:54.474975Z","caller":"traceutil/trace.go:172","msg":"trace[587542656] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-no-preload-897274; range_end:; response_count:1; response_revision:390; }","duration":"193.177922ms","start":"2025-11-29T09:15:54.281779Z","end":"2025-11-29T09:15:54.474957Z","steps":["trace[587542656] 'agreement among raft nodes before linearized reading'  (duration: 121.100864ms)","trace[587542656] 'range keys from in-memory index tree'  (duration: 71.858692ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-29T09:15:54.475068Z","caller":"traceutil/trace.go:172","msg":"trace[1513463331] transaction","detail":"{read_only:false; response_revision:391; number_of_response:1; }","duration":"292.911176ms","start":"2025-11-29T09:15:54.182132Z","end":"2025-11-29T09:15:54.475043Z","steps":["trace[1513463331] 'process raft request'  (duration: 220.767101ms)","trace[1513463331] 'compare'  (duration: 71.886458ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-29T09:15:59.695190Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"130.438586ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766494668301026 > lease_revoke:<id:5b339acee584a0fb>","response":"size:29"}
	
	
	==> kernel <==
	 09:16:15 up 58 min,  0 user,  load average: 5.93, 4.11, 2.46
	Linux no-preload-897274 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d2cbc1d66f326eb1ab9b18bae0f2bc14035ef8f553a3ddc90c37c20e02fd90d1] <==
	I1129 09:15:52.572196       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 09:15:52.667451       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1129 09:15:52.667642       1 main.go:148] setting mtu 1500 for CNI 
	I1129 09:15:52.667663       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 09:15:52.667700       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T09:15:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 09:15:52.873744       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 09:15:52.873955       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 09:15:52.874006       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 09:15:53.067360       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1129 09:15:53.175207       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 09:15:53.175241       1 metrics.go:72] Registering metrics
	I1129 09:15:53.175316       1 controller.go:711] "Syncing nftables rules"
	I1129 09:16:02.880021       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1129 09:16:02.880078       1 main.go:301] handling current node
	I1129 09:16:12.874990       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1129 09:16:12.875030       1 main.go:301] handling current node
	
	
	==> kube-apiserver [58e496065fd979bc54dc57f09d5652ec390dfa8d8705cd20d0352c2f06fd2a7f] <==
	E1129 09:15:42.465648       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1129 09:15:42.510992       1 controller.go:667] quota admission added evaluator for: namespaces
	I1129 09:15:42.519323       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:15:42.519469       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1129 09:15:42.527437       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:15:42.528600       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1129 09:15:42.600909       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 09:15:43.313456       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1129 09:15:43.318458       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1129 09:15:43.318482       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 09:15:43.973183       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 09:15:44.022020       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 09:15:44.118358       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1129 09:15:44.125161       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1129 09:15:44.126575       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 09:15:44.132216       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 09:15:44.342310       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 09:15:45.058967       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 09:15:45.082915       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1129 09:15:45.111077       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1129 09:15:50.043594       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 09:15:50.346725       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1129 09:15:50.465991       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:15:50.517864       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1129 09:16:14.155895       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:56120: use of closed network connection
	
	
	==> kube-controller-manager [5be8ce7b16cb981b945696d8cfe2e1393fcd6c8a66a4775e9146637d7b4526bb] <==
	I1129 09:15:49.326007       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1129 09:15:49.327217       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1129 09:15:49.336216       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1129 09:15:49.339652       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1129 09:15:49.339712       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1129 09:15:49.339828       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1129 09:15:49.339949       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-897274"
	I1129 09:15:49.340015       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1129 09:15:49.340039       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1129 09:15:49.340994       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1129 09:15:49.341014       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1129 09:15:49.341082       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1129 09:15:49.341202       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1129 09:15:49.341324       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1129 09:15:49.341474       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1129 09:15:49.341514       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1129 09:15:49.341754       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1129 09:15:49.344199       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1129 09:15:49.347532       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:15:49.347680       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:15:49.348409       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1129 09:15:49.353119       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1129 09:15:49.362301       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1129 09:15:49.374946       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:16:04.341974       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [abc233d34c61bd9fafac2c9b3f066130243927b3d9edf06882556161c4c04b5e] <==
	I1129 09:15:52.420415       1 server_linux.go:53] "Using iptables proxy"
	I1129 09:15:52.495061       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 09:15:52.596195       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 09:15:52.596237       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1129 09:15:52.596336       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 09:15:52.622035       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 09:15:52.622095       1 server_linux.go:132] "Using iptables Proxier"
	I1129 09:15:52.629282       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 09:15:52.629779       1 server.go:527] "Version info" version="v1.34.1"
	I1129 09:15:52.629813       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:15:52.632243       1 config.go:200] "Starting service config controller"
	I1129 09:15:52.635347       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 09:15:52.633509       1 config.go:106] "Starting endpoint slice config controller"
	I1129 09:15:52.635455       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 09:15:52.633583       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 09:15:52.635470       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 09:15:52.632177       1 config.go:309] "Starting node config controller"
	I1129 09:15:52.635481       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 09:15:52.635487       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 09:15:52.735610       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 09:15:52.735624       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1129 09:15:52.735610       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [232a6c70a51bc7dcbe028624f8f44b26420889aa7c5f5bc1ffec9b7b7336820f] <==
	E1129 09:15:42.377603       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1129 09:15:42.379710       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1129 09:15:42.380018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1129 09:15:42.381732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1129 09:15:42.381804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1129 09:15:42.381822       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 09:15:42.381913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1129 09:15:42.381929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1129 09:15:42.381981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 09:15:42.381993       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1129 09:15:42.382061       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1129 09:15:43.254788       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1129 09:15:43.282822       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 09:15:43.283644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 09:15:43.328059       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1129 09:15:43.369184       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1129 09:15:43.407576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 09:15:43.415081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 09:15:43.441196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 09:15:43.446318       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1129 09:15:43.461655       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1129 09:15:43.489569       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1129 09:15:43.636879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1129 09:15:43.728094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1129 09:15:45.464632       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 09:15:46 no-preload-897274 kubelet[2308]: I1129 09:15:46.129243    2308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-897274" podStartSLOduration=1.129219967 podStartE2EDuration="1.129219967s" podCreationTimestamp="2025-11-29 09:15:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:15:46.129153587 +0000 UTC m=+1.285028403" watchObservedRunningTime="2025-11-29 09:15:46.129219967 +0000 UTC m=+1.285094783"
	Nov 29 09:15:46 no-preload-897274 kubelet[2308]: I1129 09:15:46.130306    2308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-897274" podStartSLOduration=1.1302777019999999 podStartE2EDuration="1.130277702s" podCreationTimestamp="2025-11-29 09:15:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:15:46.117288022 +0000 UTC m=+1.273162840" watchObservedRunningTime="2025-11-29 09:15:46.130277702 +0000 UTC m=+1.286152520"
	Nov 29 09:15:49 no-preload-897274 kubelet[2308]: I1129 09:15:49.329237    2308 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 29 09:15:49 no-preload-897274 kubelet[2308]: I1129 09:15:49.330496    2308 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 29 09:15:50 no-preload-897274 kubelet[2308]: E1129 09:15:50.413611    2308 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:no-preload-897274\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'no-preload-897274' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 29 09:15:50 no-preload-897274 kubelet[2308]: I1129 09:15:50.480156    2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8f87e20c-ba19-4b3c-a04a-262e76f44c0d-cni-cfg\") pod \"kindnet-jbmcv\" (UID: \"8f87e20c-ba19-4b3c-a04a-262e76f44c0d\") " pod="kube-system/kindnet-jbmcv"
	Nov 29 09:15:50 no-preload-897274 kubelet[2308]: I1129 09:15:50.480222    2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f87e20c-ba19-4b3c-a04a-262e76f44c0d-lib-modules\") pod \"kindnet-jbmcv\" (UID: \"8f87e20c-ba19-4b3c-a04a-262e76f44c0d\") " pod="kube-system/kindnet-jbmcv"
	Nov 29 09:15:50 no-preload-897274 kubelet[2308]: I1129 09:15:50.480250    2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f87e20c-ba19-4b3c-a04a-262e76f44c0d-xtables-lock\") pod \"kindnet-jbmcv\" (UID: \"8f87e20c-ba19-4b3c-a04a-262e76f44c0d\") " pod="kube-system/kindnet-jbmcv"
	Nov 29 09:15:50 no-preload-897274 kubelet[2308]: I1129 09:15:50.480279    2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9zxk\" (UniqueName: \"kubernetes.io/projected/8f87e20c-ba19-4b3c-a04a-262e76f44c0d-kube-api-access-v9zxk\") pod \"kindnet-jbmcv\" (UID: \"8f87e20c-ba19-4b3c-a04a-262e76f44c0d\") " pod="kube-system/kindnet-jbmcv"
	Nov 29 09:15:50 no-preload-897274 kubelet[2308]: I1129 09:15:50.480322    2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fdfd4040-2a9c-4d7a-8fcb-d94a67351df9-kube-proxy\") pod \"kube-proxy-h9zhz\" (UID: \"fdfd4040-2a9c-4d7a-8fcb-d94a67351df9\") " pod="kube-system/kube-proxy-h9zhz"
	Nov 29 09:15:50 no-preload-897274 kubelet[2308]: I1129 09:15:50.480369    2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fdfd4040-2a9c-4d7a-8fcb-d94a67351df9-lib-modules\") pod \"kube-proxy-h9zhz\" (UID: \"fdfd4040-2a9c-4d7a-8fcb-d94a67351df9\") " pod="kube-system/kube-proxy-h9zhz"
	Nov 29 09:15:50 no-preload-897274 kubelet[2308]: I1129 09:15:50.480408    2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fdfd4040-2a9c-4d7a-8fcb-d94a67351df9-xtables-lock\") pod \"kube-proxy-h9zhz\" (UID: \"fdfd4040-2a9c-4d7a-8fcb-d94a67351df9\") " pod="kube-system/kube-proxy-h9zhz"
	Nov 29 09:15:50 no-preload-897274 kubelet[2308]: I1129 09:15:50.480433    2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brqz6\" (UniqueName: \"kubernetes.io/projected/fdfd4040-2a9c-4d7a-8fcb-d94a67351df9-kube-api-access-brqz6\") pod \"kube-proxy-h9zhz\" (UID: \"fdfd4040-2a9c-4d7a-8fcb-d94a67351df9\") " pod="kube-system/kube-proxy-h9zhz"
	Nov 29 09:15:51 no-preload-897274 kubelet[2308]: E1129 09:15:51.585590    2308 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Nov 29 09:15:51 no-preload-897274 kubelet[2308]: E1129 09:15:51.585712    2308 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fdfd4040-2a9c-4d7a-8fcb-d94a67351df9-kube-proxy podName:fdfd4040-2a9c-4d7a-8fcb-d94a67351df9 nodeName:}" failed. No retries permitted until 2025-11-29 09:15:52.08568957 +0000 UTC m=+7.241564389 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/fdfd4040-2a9c-4d7a-8fcb-d94a67351df9-kube-proxy") pod "kube-proxy-h9zhz" (UID: "fdfd4040-2a9c-4d7a-8fcb-d94a67351df9") : failed to sync configmap cache: timed out waiting for the condition
	Nov 29 09:15:53 no-preload-897274 kubelet[2308]: I1129 09:15:53.104964    2308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h9zhz" podStartSLOduration=3.104939322 podStartE2EDuration="3.104939322s" podCreationTimestamp="2025-11-29 09:15:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:15:53.102451648 +0000 UTC m=+8.258326470" watchObservedRunningTime="2025-11-29 09:15:53.104939322 +0000 UTC m=+8.260814144"
	Nov 29 09:15:53 no-preload-897274 kubelet[2308]: I1129 09:15:53.396111    2308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-jbmcv" podStartSLOduration=1.73947749 podStartE2EDuration="3.39608882s" podCreationTimestamp="2025-11-29 09:15:50 +0000 UTC" firstStartedPulling="2025-11-29 09:15:50.704392055 +0000 UTC m=+5.860266857" lastFinishedPulling="2025-11-29 09:15:52.361003373 +0000 UTC m=+7.516878187" observedRunningTime="2025-11-29 09:15:53.395942726 +0000 UTC m=+8.551817548" watchObservedRunningTime="2025-11-29 09:15:53.39608882 +0000 UTC m=+8.551963641"
	Nov 29 09:16:03 no-preload-897274 kubelet[2308]: I1129 09:16:03.208880    2308 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 29 09:16:03 no-preload-897274 kubelet[2308]: I1129 09:16:03.274903    2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bece0447-4cd3-40a8-9624-df30d7eca5c7-config-volume\") pod \"coredns-66bc5c9577-85hh2\" (UID: \"bece0447-4cd3-40a8-9624-df30d7eca5c7\") " pod="kube-system/coredns-66bc5c9577-85hh2"
	Nov 29 09:16:03 no-preload-897274 kubelet[2308]: I1129 09:16:03.274970    2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kjph\" (UniqueName: \"kubernetes.io/projected/bece0447-4cd3-40a8-9624-df30d7eca5c7-kube-api-access-4kjph\") pod \"coredns-66bc5c9577-85hh2\" (UID: \"bece0447-4cd3-40a8-9624-df30d7eca5c7\") " pod="kube-system/coredns-66bc5c9577-85hh2"
	Nov 29 09:16:03 no-preload-897274 kubelet[2308]: I1129 09:16:03.275005    2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/434d3f91-300e-489c-864a-f221d18a7b07-tmp\") pod \"storage-provisioner\" (UID: \"434d3f91-300e-489c-864a-f221d18a7b07\") " pod="kube-system/storage-provisioner"
	Nov 29 09:16:03 no-preload-897274 kubelet[2308]: I1129 09:16:03.275029    2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fjfp\" (UniqueName: \"kubernetes.io/projected/434d3f91-300e-489c-864a-f221d18a7b07-kube-api-access-2fjfp\") pod \"storage-provisioner\" (UID: \"434d3f91-300e-489c-864a-f221d18a7b07\") " pod="kube-system/storage-provisioner"
	Nov 29 09:16:04 no-preload-897274 kubelet[2308]: I1129 09:16:04.065363    2308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.065338263 podStartE2EDuration="13.065338263s" podCreationTimestamp="2025-11-29 09:15:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:16:04.053116906 +0000 UTC m=+19.208991729" watchObservedRunningTime="2025-11-29 09:16:04.065338263 +0000 UTC m=+19.221213085"
	Nov 29 09:16:04 no-preload-897274 kubelet[2308]: I1129 09:16:04.065723    2308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-85hh2" podStartSLOduration=14.0657111 podStartE2EDuration="14.0657111s" podCreationTimestamp="2025-11-29 09:15:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:16:04.065146316 +0000 UTC m=+19.221021152" watchObservedRunningTime="2025-11-29 09:16:04.0657111 +0000 UTC m=+19.221585923"
	Nov 29 09:16:06 no-preload-897274 kubelet[2308]: I1129 09:16:06.096896    2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsdhn\" (UniqueName: \"kubernetes.io/projected/3251ffcc-aef4-4718-b927-af59fc9befca-kube-api-access-fsdhn\") pod \"busybox\" (UID: \"3251ffcc-aef4-4718-b927-af59fc9befca\") " pod="default/busybox"
	
	
	==> storage-provisioner [cc9d303d502d4f4ddd6e1001097ff09a4a99064e2b5f0fa11ebd208ea2d672a9] <==
	I1129 09:16:03.630567       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1129 09:16:03.644944       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1129 09:16:03.645033       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1129 09:16:03.648342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:16:03.655968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:16:03.656296       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 09:16:03.656539       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-897274_5f42f17b-054b-49d6-9f9c-76414a0a3001!
	I1129 09:16:03.656428       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"effd3485-4df8-4871-84ed-37c153135089", APIVersion:"v1", ResourceVersion:"412", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-897274_5f42f17b-054b-49d6-9f9c-76414a0a3001 became leader
	W1129 09:16:03.659337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:16:03.664708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:16:03.757288       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-897274_5f42f17b-054b-49d6-9f9c-76414a0a3001!
	W1129 09:16:05.669291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:16:05.674374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:16:07.678215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:16:07.682421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:16:09.687158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:16:09.693183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:16:11.696920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:16:11.701370       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:16:13.705272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:16:13.709687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:16:15.716304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:16:15.720972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-897274 -n no-preload-897274
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-897274 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.42s)
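The post-mortem above closes with a generic health check, `kubectl get po -A --field-selector=status.phase!=Running`, which lists every pod not in phase Running. For reference, here is a minimal client-go sketch of the same check; it is illustrative only, not part of helpers_test.go, and it assumes the default kubeconfig at $HOME/.kube/config and the current context rather than the test's --context flag:

    // list_not_running.go: list pods in all namespaces whose status.phase
    // is not Running, using the same field selector the helper passes to
    // kubectl. Kubeconfig location is an assumption for illustration.
    package main

    import (
    	"context"
    	"fmt"
    	"path/filepath"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/util/homedir"
    )

    func main() {
    	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Same field selector the helper passes to kubectl.
    	pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
    		FieldSelector: "status.phase!=Running",
    	})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		fmt.Println(p.Namespace+"/"+p.Name, p.Status.Phase)
    	}
    }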
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.68s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-160987 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-160987 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (338.843127ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:16:40Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-160987 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-160987 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-160987 describe deploy/metrics-server -n kube-system: exit status 1 (82.907059ms)
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-160987 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
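The exit status 11 above is minikube's MK_ADDON_ENABLE_PAUSED path: before enabling an addon it checks for paused containers by running `sudo runc list -f json` on the node, and on this crio node that call fails with "open /run/runc: no such file or directory". A minimal Go sketch of that check pattern follows; it is an illustration built from the error text above, not minikube's actual implementation, and the struct fields follow runc's JSON state output:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // runcState mirrors the fields of interest in `runc list -f json` output.
    type runcState struct {
    	ID     string `json:"id"`
    	Status string `json:"status"` // "created", "running", "paused", "stopped"
    }

    // listPaused shells out exactly like the failing check in the log,
    // then keeps only the IDs of paused containers.
    func listPaused() ([]string, error) {
    	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
    	if err != nil {
    		// On this crio node the call dies with
    		// "open /run/runc: no such file or directory".
    		return nil, fmt.Errorf("runc list: %w", err)
    	}
    	var states []runcState
    	if err := json.Unmarshal(out, &states); err != nil {
    		return nil, err
    	}
    	var paused []string
    	for _, s := range states {
    		if s.Status == "paused" {
    			paused = append(paused, s.ID)
    		}
    	}
    	return paused, nil
    }

    func main() {
    	ids, err := listPaused()
    	if err != nil {
    		fmt.Println("check paused failed:", err)
    		return
    	}
    	fmt.Println("paused containers:", ids)
    }

Run where /run/runc is absent, the Output() call returns the same non-zero exit that the test surfaces as status 11.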
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-160987
helpers_test.go:243: (dbg) docker inspect embed-certs-160987:
-- stdout --
	[
	    {
	        "Id": "7b45c51a261424d0f936d1259fe11001ab6554f9438c68d643ce706af6921dd8",
	        "Created": "2025-11-29T09:15:55.293730055Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 321211,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T09:15:55.343966717Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/7b45c51a261424d0f936d1259fe11001ab6554f9438c68d643ce706af6921dd8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7b45c51a261424d0f936d1259fe11001ab6554f9438c68d643ce706af6921dd8/hostname",
	        "HostsPath": "/var/lib/docker/containers/7b45c51a261424d0f936d1259fe11001ab6554f9438c68d643ce706af6921dd8/hosts",
	        "LogPath": "/var/lib/docker/containers/7b45c51a261424d0f936d1259fe11001ab6554f9438c68d643ce706af6921dd8/7b45c51a261424d0f936d1259fe11001ab6554f9438c68d643ce706af6921dd8-json.log",
	        "Name": "/embed-certs-160987",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-160987:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-160987",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7b45c51a261424d0f936d1259fe11001ab6554f9438c68d643ce706af6921dd8",
	                "LowerDir": "/var/lib/docker/overlay2/338bc42e1b80ba62e9fe902fb732aa26dedd5005037b5297154c97608cba7a83-init/diff:/var/lib/docker/overlay2/5b012372cfb54f6c71f4d7f0bca0124866eeda530eaa04bd84a67c7b4c8b35a8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/338bc42e1b80ba62e9fe902fb732aa26dedd5005037b5297154c97608cba7a83/merged",
	                "UpperDir": "/var/lib/docker/overlay2/338bc42e1b80ba62e9fe902fb732aa26dedd5005037b5297154c97608cba7a83/diff",
	                "WorkDir": "/var/lib/docker/overlay2/338bc42e1b80ba62e9fe902fb732aa26dedd5005037b5297154c97608cba7a83/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-160987",
	                "Source": "/var/lib/docker/volumes/embed-certs-160987/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-160987",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-160987",
	                "name.minikube.sigs.k8s.io": "embed-certs-160987",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "4e0c41cfa3433200e9e240409d81bc99812eed1a09d09bbde7143b8c610408ad",
	            "SandboxKey": "/var/run/docker/netns/4e0c41cfa343",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-160987": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8f9ed915c5ff4babba294a5f95692de1cf5aa6f0db70276e7d083db5e7930b90",
	                    "EndpointID": "ca94b9b8a0d02511b40b70e24c4483e49fc637d0991384db06d1e4066b4373a0",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "7e:ad:2f:4c:3d:82",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-160987",
	                        "7b45c51a2614"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
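The inspect dump above can be narrowed to just the fields the post-mortem consumes using docker's standard Go-template --format flag; a minimal sketch against the container from this run:

	# container state only
	docker inspect --format '{{.State.Status}}' embed-certs-160987
	# published host ports, i.e. the NetworkSettings.Ports block shown above
	docker inspect --format '{{json .NetworkSettings.Ports}}' embed-certs-160987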
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-160987 -n embed-certs-160987
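The status probe above uses the same Go-template mechanism; assuming the usual minikube status fields (Host, Kubelet, APIServer, Kubeconfig), a fuller single-line probe would look like:

	out/minikube-linux-amd64 status -p embed-certs-160987 \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'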
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-160987 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-160987 logs -n 25: (1.235633276s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-628644 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p bridge-628644 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ start   │ -p embed-certs-160987 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-628644 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p bridge-628644 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo containerd config dump                                                                                                                                                                                                  │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo crio config                                                                                                                                                                                                             │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ delete  │ -p bridge-628644                                                                                                                                                                                                                              │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ delete  │ -p disable-driver-mounts-327778                                                                                                                                                                                                               │ disable-driver-mounts-327778 │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ start   │ -p default-k8s-diff-port-632243 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:16 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-680646 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	│ stop    │ -p old-k8s-version-680646 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:16 UTC │
	│ addons  │ enable metrics-server -p no-preload-897274 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	│ stop    │ -p no-preload-897274 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:16 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-680646 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:16 UTC │
	│ start   │ -p old-k8s-version-680646 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	│ start   │ -p no-preload-897274 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-160987 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:16:33
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 09:16:33.122678  331191 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:16:33.122964  331191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:16:33.122976  331191 out.go:374] Setting ErrFile to fd 2...
	I1129 09:16:33.122983  331191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:16:33.123284  331191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 09:16:33.123913  331191 out.go:368] Setting JSON to false
	I1129 09:16:33.125477  331191 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3545,"bootTime":1764404248,"procs":399,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 09:16:33.125566  331191 start.go:143] virtualization: kvm guest
	I1129 09:16:33.127567  331191 out.go:179] * [no-preload-897274] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 09:16:33.129322  331191 notify.go:221] Checking for updates...
	I1129 09:16:33.129396  331191 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:16:33.133381  331191 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:16:33.135622  331191 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 09:16:33.136799  331191 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5652/.minikube
	I1129 09:16:33.138360  331191 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 09:16:33.141984  331191 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:16:33.144186  331191 config.go:182] Loaded profile config "no-preload-897274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:16:33.144997  331191 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:16:33.171095  331191 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1129 09:16:33.171229  331191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:16:33.235991  331191 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-29 09:16:33.224229786 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:16:33.236090  331191 docker.go:319] overlay module found
	I1129 09:16:33.238584  331191 out.go:179] * Using the docker driver based on existing profile
	I1129 09:16:33.239751  331191 start.go:309] selected driver: docker
	I1129 09:16:33.239767  331191 start.go:927] validating driver "docker" against &{Name:no-preload-897274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-897274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:16:33.239938  331191 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:16:33.240643  331191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:16:33.304676  331191 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-29 09:16:33.294656443 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:16:33.304989  331191 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:16:33.305022  331191 cni.go:84] Creating CNI manager for ""
	I1129 09:16:33.305082  331191 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:16:33.305121  331191 start.go:353] cluster config:
	{Name:no-preload-897274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-897274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:16:33.306884  331191 out.go:179] * Starting "no-preload-897274" primary control-plane node in "no-preload-897274" cluster
	I1129 09:16:33.308057  331191 cache.go:134] Beginning downloading kic base image for docker with crio
	I1129 09:16:33.309446  331191 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 09:16:33.310547  331191 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:16:33.310649  331191 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 09:16:33.310704  331191 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/no-preload-897274/config.json ...
	I1129 09:16:33.310903  331191 cache.go:107] acquiring lock: {Name:mk8f7573c1bcf364ee3e869844e236299ef911a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:16:33.310957  331191 cache.go:107] acquiring lock: {Name:mk3d47c34f6428afe07538d6b2903bd93c895587 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:16:33.310990  331191 cache.go:107] acquiring lock: {Name:mkd8c083b40056ddf2bcea6e5d97bd63c854310f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:16:33.311021  331191 cache.go:115] /home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1129 09:16:33.310956  331191 cache.go:107] acquiring lock: {Name:mk8aac6c82be99816e28146313299368d69d5087 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:16:33.311008  331191 cache.go:107] acquiring lock: {Name:mk422b9f5e82d6d6cab524cfb12c9a0d353a9e30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:16:33.311032  331191 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 148.823µs
	I1129 09:16:33.311049  331191 cache.go:115] /home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1129 09:16:33.310911  331191 cache.go:107] acquiring lock: {Name:mk40dde31b69aa254af83ecc3b922eeafac6b928 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:16:33.311059  331191 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 75.229µs
	I1129 09:16:33.311068  331191 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1129 09:16:33.311049  331191 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1129 09:16:33.311072  331191 cache.go:115] /home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1129 09:16:33.311085  331191 cache.go:115] /home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1129 09:16:33.311086  331191 cache.go:115] /home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1129 09:16:33.311086  331191 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 130.998µs
	I1129 09:16:33.311093  331191 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 206.951µs
	I1129 09:16:33.311096  331191 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1129 09:16:33.311096  331191 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 163.829µs
	I1129 09:16:33.311101  331191 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1129 09:16:33.311106  331191 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1129 09:16:33.311043  331191 cache.go:107] acquiring lock: {Name:mk1202721af231e365c67615309450a51ff4e3b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:16:33.311114  331191 cache.go:115] /home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1129 09:16:33.311122  331191 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 161.828µs
	I1129 09:16:33.311139  331191 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1129 09:16:33.311074  331191 cache.go:107] acquiring lock: {Name:mka04b02303b6e225ac2b476db413ffbfd8b53c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:16:33.311197  331191 cache.go:115] /home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1129 09:16:33.311222  331191 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 232.578µs
	I1129 09:16:33.311233  331191 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1129 09:16:33.311259  331191 cache.go:115] /home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1129 09:16:33.311275  331191 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 249.776µs
	I1129 09:16:33.311282  331191 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1129 09:16:33.311289  331191 cache.go:87] Successfully saved all images to host disk.
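
All of the cache hits above resolve under the MINIKUBE_HOME set earlier in this log; listing that directory shows the image tarballs a --preload=false start can reuse (path taken verbatim from this run):

	ls /home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/
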
	I1129 09:16:33.333608  331191 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 09:16:33.333631  331191 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 09:16:33.333651  331191 cache.go:243] Successfully downloaded all kic artifacts
	I1129 09:16:33.333689  331191 start.go:360] acquireMachinesLock for no-preload-897274: {Name:mk26d63983c64bd83bbc5a0fb0c10ac2c7be5a49 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:16:33.333757  331191 start.go:364] duration metric: took 46.1µs to acquireMachinesLock for "no-preload-897274"
	I1129 09:16:33.333778  331191 start.go:96] Skipping create...Using existing machine configuration
	I1129 09:16:33.333786  331191 fix.go:54] fixHost starting: 
	I1129 09:16:33.334036  331191 cli_runner.go:164] Run: docker container inspect no-preload-897274 --format={{.State.Status}}
	I1129 09:16:33.352834  331191 fix.go:112] recreateIfNeeded on no-preload-897274: state=Stopped err=<nil>
	W1129 09:16:33.352882  331191 fix.go:138] unexpected machine state, will restart: <nil>
	I1129 09:16:32.041544  322024 node_ready.go:49] node "default-k8s-diff-port-632243" is "Ready"
	I1129 09:16:32.041571  322024 node_ready.go:38] duration metric: took 11.003549148s for node "default-k8s-diff-port-632243" to be "Ready" ...
	I1129 09:16:32.041585  322024 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:16:32.041642  322024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:16:32.054761  322024 api_server.go:72] duration metric: took 11.352970675s to wait for apiserver process to appear ...
	I1129 09:16:32.054785  322024 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:16:32.054802  322024 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1129 09:16:32.060196  322024 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1129 09:16:32.061315  322024 api_server.go:141] control plane version: v1.34.1
	I1129 09:16:32.061345  322024 api_server.go:131] duration metric: took 6.553174ms to wait for apiserver health ...
	I1129 09:16:32.061356  322024 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:16:32.065191  322024 system_pods.go:59] 8 kube-system pods found
	I1129 09:16:32.065228  322024 system_pods.go:61] "coredns-66bc5c9577-z4m7c" [98358d85-a090-44af-b52c-b5043215489d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:16:32.065236  322024 system_pods.go:61] "etcd-default-k8s-diff-port-632243" [09a34b15-fbfc-4348-90c4-e24e6baf1a19] Running
	I1129 09:16:32.065251  322024 system_pods.go:61] "kindnet-tpstm" [15e600f0-69fa-43be-ad87-07a80e245c73] Running
	I1129 09:16:32.065258  322024 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-632243" [05294706-b493-4660-8b69-19a3686ec539] Running
	I1129 09:16:32.065266  322024 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-632243" [fb12ecb8-1c38-404c-b1f5-c52bd3c76ae3] Running
	I1129 09:16:32.065273  322024 system_pods.go:61] "kube-proxy-p2nf7" [50905f73-5af2-401c-a482-7d68d8d3bdc4] Running
	I1129 09:16:32.065282  322024 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-632243" [31003176-dbcb-4f15-88c6-ea1592ffdf1b] Running
	I1129 09:16:32.065290  322024 system_pods.go:61] "storage-provisioner" [b28962e0-c388-44d7-8e57-e4030e80dabd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:16:32.065303  322024 system_pods.go:74] duration metric: took 3.937854ms to wait for pod list to return data ...
	I1129 09:16:32.065316  322024 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:16:32.067970  322024 default_sa.go:45] found service account: "default"
	I1129 09:16:32.067992  322024 default_sa.go:55] duration metric: took 2.670246ms for default service account to be created ...
	I1129 09:16:32.068003  322024 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 09:16:32.070916  322024 system_pods.go:86] 8 kube-system pods found
	I1129 09:16:32.070942  322024 system_pods.go:89] "coredns-66bc5c9577-z4m7c" [98358d85-a090-44af-b52c-b5043215489d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:16:32.070948  322024 system_pods.go:89] "etcd-default-k8s-diff-port-632243" [09a34b15-fbfc-4348-90c4-e24e6baf1a19] Running
	I1129 09:16:32.070955  322024 system_pods.go:89] "kindnet-tpstm" [15e600f0-69fa-43be-ad87-07a80e245c73] Running
	I1129 09:16:32.070959  322024 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-632243" [05294706-b493-4660-8b69-19a3686ec539] Running
	I1129 09:16:32.070970  322024 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-632243" [fb12ecb8-1c38-404c-b1f5-c52bd3c76ae3] Running
	I1129 09:16:32.070976  322024 system_pods.go:89] "kube-proxy-p2nf7" [50905f73-5af2-401c-a482-7d68d8d3bdc4] Running
	I1129 09:16:32.070980  322024 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-632243" [31003176-dbcb-4f15-88c6-ea1592ffdf1b] Running
	I1129 09:16:32.070984  322024 system_pods.go:89] "storage-provisioner" [b28962e0-c388-44d7-8e57-e4030e80dabd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:16:32.071010  322024 retry.go:31] will retry after 234.871211ms: missing components: kube-dns
	I1129 09:16:32.310261  322024 system_pods.go:86] 8 kube-system pods found
	I1129 09:16:32.310290  322024 system_pods.go:89] "coredns-66bc5c9577-z4m7c" [98358d85-a090-44af-b52c-b5043215489d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:16:32.310299  322024 system_pods.go:89] "etcd-default-k8s-diff-port-632243" [09a34b15-fbfc-4348-90c4-e24e6baf1a19] Running
	I1129 09:16:32.310305  322024 system_pods.go:89] "kindnet-tpstm" [15e600f0-69fa-43be-ad87-07a80e245c73] Running
	I1129 09:16:32.310310  322024 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-632243" [05294706-b493-4660-8b69-19a3686ec539] Running
	I1129 09:16:32.310313  322024 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-632243" [fb12ecb8-1c38-404c-b1f5-c52bd3c76ae3] Running
	I1129 09:16:32.310318  322024 system_pods.go:89] "kube-proxy-p2nf7" [50905f73-5af2-401c-a482-7d68d8d3bdc4] Running
	I1129 09:16:32.310321  322024 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-632243" [31003176-dbcb-4f15-88c6-ea1592ffdf1b] Running
	I1129 09:16:32.310326  322024 system_pods.go:89] "storage-provisioner" [b28962e0-c388-44d7-8e57-e4030e80dabd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:16:32.310344  322024 retry.go:31] will retry after 262.041893ms: missing components: kube-dns
	I1129 09:16:32.577309  322024 system_pods.go:86] 8 kube-system pods found
	I1129 09:16:32.577355  322024 system_pods.go:89] "coredns-66bc5c9577-z4m7c" [98358d85-a090-44af-b52c-b5043215489d] Running
	I1129 09:16:32.577363  322024 system_pods.go:89] "etcd-default-k8s-diff-port-632243" [09a34b15-fbfc-4348-90c4-e24e6baf1a19] Running
	I1129 09:16:32.577375  322024 system_pods.go:89] "kindnet-tpstm" [15e600f0-69fa-43be-ad87-07a80e245c73] Running
	I1129 09:16:32.577380  322024 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-632243" [05294706-b493-4660-8b69-19a3686ec539] Running
	I1129 09:16:32.577386  322024 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-632243" [fb12ecb8-1c38-404c-b1f5-c52bd3c76ae3] Running
	I1129 09:16:32.577391  322024 system_pods.go:89] "kube-proxy-p2nf7" [50905f73-5af2-401c-a482-7d68d8d3bdc4] Running
	I1129 09:16:32.577397  322024 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-632243" [31003176-dbcb-4f15-88c6-ea1592ffdf1b] Running
	I1129 09:16:32.577414  322024 system_pods.go:89] "storage-provisioner" [b28962e0-c388-44d7-8e57-e4030e80dabd] Running
	I1129 09:16:32.577424  322024 system_pods.go:126] duration metric: took 509.413863ms to wait for k8s-apps to be running ...
	I1129 09:16:32.577433  322024 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 09:16:32.577486  322024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:16:32.591542  322024 system_svc.go:56] duration metric: took 14.099503ms WaitForService to wait for kubelet
	I1129 09:16:32.591571  322024 kubeadm.go:587] duration metric: took 11.889799292s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:16:32.591590  322024 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:16:32.594960  322024 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1129 09:16:32.594988  322024 node_conditions.go:123] node cpu capacity is 8
	I1129 09:16:32.595001  322024 node_conditions.go:105] duration metric: took 3.406781ms to run NodePressure ...
	I1129 09:16:32.595016  322024 start.go:242] waiting for startup goroutines ...
	I1129 09:16:32.595025  322024 start.go:247] waiting for cluster config update ...
	I1129 09:16:32.595045  322024 start.go:256] writing updated cluster config ...
	I1129 09:16:32.595334  322024 ssh_runner.go:195] Run: rm -f paused
	I1129 09:16:32.599700  322024 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:16:32.603726  322024 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-z4m7c" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:16:32.608492  322024 pod_ready.go:94] pod "coredns-66bc5c9577-z4m7c" is "Ready"
	I1129 09:16:32.608525  322024 pod_ready.go:86] duration metric: took 4.774667ms for pod "coredns-66bc5c9577-z4m7c" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:16:32.610647  322024 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:16:32.618009  322024 pod_ready.go:94] pod "etcd-default-k8s-diff-port-632243" is "Ready"
	I1129 09:16:32.618048  322024 pod_ready.go:86] duration metric: took 7.376123ms for pod "etcd-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:16:32.620590  322024 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:16:32.625290  322024 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-632243" is "Ready"
	I1129 09:16:32.625320  322024 pod_ready.go:86] duration metric: took 4.703069ms for pod "kube-apiserver-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:16:32.627886  322024 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:16:33.004800  322024 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-632243" is "Ready"
	I1129 09:16:33.004836  322024 pod_ready.go:86] duration metric: took 376.921962ms for pod "kube-controller-manager-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:16:33.204356  322024 pod_ready.go:83] waiting for pod "kube-proxy-p2nf7" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:16:33.604207  322024 pod_ready.go:94] pod "kube-proxy-p2nf7" is "Ready"
	I1129 09:16:33.604239  322024 pod_ready.go:86] duration metric: took 399.852528ms for pod "kube-proxy-p2nf7" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:16:33.804985  322024 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:16:34.204711  322024 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-632243" is "Ready"
	I1129 09:16:34.204740  322024 pod_ready.go:86] duration metric: took 399.726671ms for pod "kube-scheduler-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:16:34.204752  322024 pod_ready.go:40] duration metric: took 1.605019532s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:16:34.250891  322024 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1129 09:16:34.252792  322024 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-632243" cluster and "default" namespace by default
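
The pod_ready waits above poll plain label selectors, so the same readiness check can be reproduced by hand; a sketch, assuming the kubectl context minikube writes for the profile:

	kubectl --context default-k8s-diff-port-632243 -n kube-system get pods -l k8s-app=kube-dns
	kubectl --context default-k8s-diff-port-632243 -n kube-system \
	  wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=240s
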
	I1129 09:16:31.667715  328395 addons.go:530] duration metric: took 3.740311794s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1129 09:16:31.669237  328395 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1129 09:16:31.669299  328395 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
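
The [+]/[-] listing is the apiserver's verbose healthz report; the single failing hook, rbac/bootstrap-roles, clears once the bootstrap RBAC objects reconcile, which is why the next probe below returns 200. The same report can be pulled by hand (kubectl's --raw flag is standard; the direct curl works because the default bootstrap roles permit anonymous reads of /healthz):

	kubectl --context old-k8s-version-680646 get --raw='/healthz?verbose'
	curl -ks 'https://192.168.76.2:8443/healthz?verbose'
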
	I1129 09:16:32.165030  328395 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 09:16:32.169403  328395 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1129 09:16:32.170688  328395 api_server.go:141] control plane version: v1.28.0
	I1129 09:16:32.170713  328395 api_server.go:131] duration metric: took 506.706984ms to wait for apiserver health ...
	I1129 09:16:32.170721  328395 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:16:32.174656  328395 system_pods.go:59] 8 kube-system pods found
	I1129 09:16:32.174692  328395 system_pods.go:61] "coredns-5dd5756b68-lwg8c" [34b2ab35-01c8-443b-90eb-b685e98a561b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:16:32.174703  328395 system_pods.go:61] "etcd-old-k8s-version-680646" [76196bbf-d848-4229-bc5a-a643536ce9cf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 09:16:32.174721  328395 system_pods.go:61] "kindnet-xjmpm" [4c8108ed-0909-4754-ab0e-0d92a16cdeef] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1129 09:16:32.174732  328395 system_pods.go:61] "kube-apiserver-old-k8s-version-680646" [b8828a68-07a6-4028-9315-ea72656418e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 09:16:32.174743  328395 system_pods.go:61] "kube-controller-manager-old-k8s-version-680646" [73d8f9bb-055a-404b-b261-38de3be66dbc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 09:16:32.174754  328395 system_pods.go:61] "kube-proxy-plgmf" [2911dadf-509a-47fb-80b1-7bad0dac803f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1129 09:16:32.174767  328395 system_pods.go:61] "kube-scheduler-old-k8s-version-680646" [d55c6c54-82fa-4dfc-bd16-473d13fb6004] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 09:16:32.174781  328395 system_pods.go:61] "storage-provisioner" [11cb0c11-4af9-4cf6-945c-a6dcb390a105] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:16:32.174791  328395 system_pods.go:74] duration metric: took 4.063224ms to wait for pod list to return data ...
	I1129 09:16:32.174805  328395 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:16:32.176909  328395 default_sa.go:45] found service account: "default"
	I1129 09:16:32.176929  328395 default_sa.go:55] duration metric: took 2.117219ms for default service account to be created ...
	I1129 09:16:32.176939  328395 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 09:16:32.180175  328395 system_pods.go:86] 8 kube-system pods found
	I1129 09:16:32.180203  328395 system_pods.go:89] "coredns-5dd5756b68-lwg8c" [34b2ab35-01c8-443b-90eb-b685e98a561b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:16:32.180214  328395 system_pods.go:89] "etcd-old-k8s-version-680646" [76196bbf-d848-4229-bc5a-a643536ce9cf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 09:16:32.180224  328395 system_pods.go:89] "kindnet-xjmpm" [4c8108ed-0909-4754-ab0e-0d92a16cdeef] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1129 09:16:32.180233  328395 system_pods.go:89] "kube-apiserver-old-k8s-version-680646" [b8828a68-07a6-4028-9315-ea72656418e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 09:16:32.180243  328395 system_pods.go:89] "kube-controller-manager-old-k8s-version-680646" [73d8f9bb-055a-404b-b261-38de3be66dbc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 09:16:32.180256  328395 system_pods.go:89] "kube-proxy-plgmf" [2911dadf-509a-47fb-80b1-7bad0dac803f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1129 09:16:32.180278  328395 system_pods.go:89] "kube-scheduler-old-k8s-version-680646" [d55c6c54-82fa-4dfc-bd16-473d13fb6004] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 09:16:32.180287  328395 system_pods.go:89] "storage-provisioner" [11cb0c11-4af9-4cf6-945c-a6dcb390a105] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:16:32.180297  328395 system_pods.go:126] duration metric: took 3.351229ms to wait for k8s-apps to be running ...
	I1129 09:16:32.180311  328395 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 09:16:32.180366  328395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:16:32.193765  328395 system_svc.go:56] duration metric: took 13.445261ms WaitForService to wait for kubelet
	I1129 09:16:32.193802  328395 kubeadm.go:587] duration metric: took 4.266436616s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:16:32.193825  328395 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:16:32.196753  328395 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1129 09:16:32.196776  328395 node_conditions.go:123] node cpu capacity is 8
	I1129 09:16:32.196791  328395 node_conditions.go:105] duration metric: took 2.960533ms to run NodePressure ...
	I1129 09:16:32.196803  328395 start.go:242] waiting for startup goroutines ...
	I1129 09:16:32.196813  328395 start.go:247] waiting for cluster config update ...
	I1129 09:16:32.196825  328395 start.go:256] writing updated cluster config ...
	I1129 09:16:32.197113  328395 ssh_runner.go:195] Run: rm -f paused
	I1129 09:16:32.201201  328395 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:16:32.205784  328395 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-lwg8c" in "kube-system" namespace to be "Ready" or be gone ...
	W1129 09:16:34.212642  328395 pod_ready.go:104] pod "coredns-5dd5756b68-lwg8c" is not "Ready", error: <nil>
	I1129 09:16:33.354760  331191 out.go:252] * Restarting existing docker container for "no-preload-897274" ...
	I1129 09:16:33.354860  331191 cli_runner.go:164] Run: docker start no-preload-897274
	I1129 09:16:33.624223  331191 cli_runner.go:164] Run: docker container inspect no-preload-897274 --format={{.State.Status}}
	I1129 09:16:33.645157  331191 kic.go:430] container "no-preload-897274" state is running.
	I1129 09:16:33.645691  331191 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-897274
	I1129 09:16:33.665737  331191 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/no-preload-897274/config.json ...
	I1129 09:16:33.665984  331191 machine.go:94] provisionDockerMachine start ...
	I1129 09:16:33.666057  331191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-897274
	I1129 09:16:33.686402  331191 main.go:143] libmachine: Using SSH client type: native
	I1129 09:16:33.686650  331191 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1129 09:16:33.686662  331191 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:16:33.687367  331191 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56400->127.0.0.1:33114: read: connection reset by peer
	I1129 09:16:36.835205  331191 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-897274
	
	I1129 09:16:36.835230  331191 ubuntu.go:182] provisioning hostname "no-preload-897274"
	I1129 09:16:36.835279  331191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-897274
	I1129 09:16:36.854479  331191 main.go:143] libmachine: Using SSH client type: native
	I1129 09:16:36.854694  331191 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1129 09:16:36.854706  331191 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-897274 && echo "no-preload-897274" | sudo tee /etc/hostname
	I1129 09:16:37.011492  331191 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-897274
	
	I1129 09:16:37.011594  331191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-897274
	I1129 09:16:37.032132  331191 main.go:143] libmachine: Using SSH client type: native
	I1129 09:16:37.032367  331191 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1129 09:16:37.032392  331191 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-897274' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-897274/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-897274' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:16:37.179015  331191 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 09:16:37.179046  331191 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-5652/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-5652/.minikube}
	I1129 09:16:37.179074  331191 ubuntu.go:190] setting up certificates
	I1129 09:16:37.179086  331191 provision.go:84] configureAuth start
	I1129 09:16:37.179151  331191 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-897274
	I1129 09:16:37.198996  331191 provision.go:143] copyHostCerts
	I1129 09:16:37.199055  331191 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem, removing ...
	I1129 09:16:37.199063  331191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem
	I1129 09:16:37.199130  331191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem (1078 bytes)
	I1129 09:16:37.199243  331191 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem, removing ...
	I1129 09:16:37.199252  331191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem
	I1129 09:16:37.199277  331191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem (1123 bytes)
	I1129 09:16:37.199345  331191 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem, removing ...
	I1129 09:16:37.199353  331191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem
	I1129 09:16:37.199375  331191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem (1675 bytes)
	I1129 09:16:37.199450  331191 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem org=jenkins.no-preload-897274 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-897274]
	I1129 09:16:37.400467  331191 provision.go:177] copyRemoteCerts
	I1129 09:16:37.400522  331191 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:16:37.400560  331191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-897274
	I1129 09:16:37.420190  331191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/no-preload-897274/id_rsa Username:docker}
	I1129 09:16:37.524234  331191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1129 09:16:37.544115  331191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1129 09:16:37.563441  331191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1129 09:16:37.582291  331191 provision.go:87] duration metric: took 403.186988ms to configureAuth
	I1129 09:16:37.582321  331191 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:16:37.582571  331191 config.go:182] Loaded profile config "no-preload-897274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:16:37.582697  331191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-897274
	I1129 09:16:37.601871  331191 main.go:143] libmachine: Using SSH client type: native
	I1129 09:16:37.602123  331191 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1129 09:16:37.602158  331191 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 09:16:37.955218  331191 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 09:16:37.955244  331191 machine.go:97] duration metric: took 4.289245118s to provisionDockerMachine
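The sysconfig drop-in written just above is how the --insecure-registry flag for the service CIDR reaches CRI-O (assuming the crio unit sources /etc/sysconfig/crio.minikube, which is why minikube writes it there). A sketch of verifying the result on the node, using only paths from the command in the log:

    # The file minikube just wrote, then the state of the restarted service.
    cat /etc/sysconfig/crio.minikube
    systemctl status crio --no-pager
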
	I1129 09:16:37.955260  331191 start.go:293] postStartSetup for "no-preload-897274" (driver="docker")
	I1129 09:16:37.955272  331191 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:16:37.955350  331191 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:16:37.955395  331191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-897274
	I1129 09:16:37.975484  331191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/no-preload-897274/id_rsa Username:docker}
	I1129 09:16:38.079833  331191 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:16:38.083749  331191 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:16:38.083775  331191 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:16:38.083789  331191 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5652/.minikube/addons for local assets ...
	I1129 09:16:38.083875  331191 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5652/.minikube/files for local assets ...
	I1129 09:16:38.083978  331191 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem -> 92162.pem in /etc/ssl/certs
	I1129 09:16:38.084107  331191 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:16:38.092555  331191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem --> /etc/ssl/certs/92162.pem (1708 bytes)
	I1129 09:16:38.111681  331191 start.go:296] duration metric: took 156.404597ms for postStartSetup
	I1129 09:16:38.111767  331191 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:16:38.111814  331191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-897274
	I1129 09:16:38.131932  331191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/no-preload-897274/id_rsa Username:docker}
	I1129 09:16:38.233206  331191 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 09:16:38.237977  331191 fix.go:56] duration metric: took 4.904178025s for fixHost
	I1129 09:16:38.238000  331191 start.go:83] releasing machines lock for "no-preload-897274", held for 4.904232748s
	I1129 09:16:38.238060  331191 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-897274
	I1129 09:16:38.257959  331191 ssh_runner.go:195] Run: cat /version.json
	I1129 09:16:38.258032  331191 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:16:38.258096  331191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-897274
	I1129 09:16:38.258035  331191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-897274
	I1129 09:16:38.279574  331191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/no-preload-897274/id_rsa Username:docker}
	I1129 09:16:38.279939  331191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/no-preload-897274/id_rsa Username:docker}
	I1129 09:16:38.433984  331191 ssh_runner.go:195] Run: systemctl --version
	I1129 09:16:38.440652  331191 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 09:16:38.476403  331191 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:16:38.481199  331191 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:16:38.481276  331191 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:16:38.489386  331191 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1129 09:16:38.489411  331191 start.go:496] detecting cgroup driver to use...
	I1129 09:16:38.489443  331191 detect.go:190] detected "systemd" cgroup driver on host os
	I1129 09:16:38.489484  331191 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 09:16:38.505200  331191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 09:16:38.518647  331191 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:16:38.518720  331191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:16:38.533897  331191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:16:38.547412  331191 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:16:38.629367  331191 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:16:38.708463  331191 docker.go:234] disabling docker service ...
	I1129 09:16:38.708534  331191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:16:38.724572  331191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:16:38.737770  331191 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:16:38.823099  331191 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:16:38.907650  331191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:16:38.921257  331191 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:16:38.937465  331191 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 09:16:38.937528  331191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:16:38.947545  331191 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1129 09:16:38.947620  331191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:16:38.958460  331191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:16:38.967937  331191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:16:38.977748  331191 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:16:38.986532  331191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:16:38.996268  331191 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:16:39.005517  331191 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:16:39.015263  331191 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:16:39.024145  331191 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:16:39.032514  331191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:16:39.117372  331191 ssh_runner.go:195] Run: sudo systemctl restart crio
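The sed edits above rewrite the CRI-O drop-in before this restart. Reconstructed from those commands, the relevant keys can be confirmed like so (a sketch; surrounding keys in the file vary by CRI-O version):

    # Verify the drop-in after the restart.
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # Expected, per the sed commands in this log:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
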
	I1129 09:16:39.268699  331191 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 09:16:39.268766  331191 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 09:16:39.273304  331191 start.go:564] Will wait 60s for crictl version
	I1129 09:16:39.273361  331191 ssh_runner.go:195] Run: which crictl
	I1129 09:16:39.277381  331191 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:16:39.302873  331191 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1129 09:16:39.302976  331191 ssh_runner.go:195] Run: crio --version
	I1129 09:16:39.332229  331191 ssh_runner.go:195] Run: crio --version
	I1129 09:16:39.363237  331191 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	W1129 09:16:36.712359  328395 pod_ready.go:104] pod "coredns-5dd5756b68-lwg8c" is not "Ready", error: <nil>
	W1129 09:16:39.211811  328395 pod_ready.go:104] pod "coredns-5dd5756b68-lwg8c" is not "Ready", error: <nil>
	I1129 09:16:39.364448  331191 cli_runner.go:164] Run: docker network inspect no-preload-897274 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:16:39.383343  331191 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1129 09:16:39.387690  331191 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:16:39.398885  331191 kubeadm.go:884] updating cluster {Name:no-preload-897274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-897274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:16:39.398992  331191 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:16:39.399022  331191 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:16:39.433362  331191 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:16:39.433387  331191 cache_images.go:86] Images are preloaded, skipping loading
	I1129 09:16:39.433396  331191 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1129 09:16:39.433516  331191 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-897274 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-897274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 09:16:39.433643  331191 ssh_runner.go:195] Run: crio config
	I1129 09:16:39.482584  331191 cni.go:84] Creating CNI manager for ""
	I1129 09:16:39.482608  331191 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:16:39.482625  331191 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 09:16:39.482651  331191 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-897274 NodeName:no-preload-897274 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:16:39.482809  331191 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-897274"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1129 09:16:39.482905  331191 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 09:16:39.491848  331191 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 09:16:39.491930  331191 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:16:39.500511  331191 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1129 09:16:39.513978  331191 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:16:39.527569  331191 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
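The kubeadm manifest printed above is what lands in /var/tmp/minikube/kubeadm.yaml.new here. It can be sanity-checked before kubeadm consumes it (a sketch; assumes kubeadm sits next to kubelet under /var/lib/minikube/binaries, and kubeadm >= v1.26 for the validate subcommand):

    # Hypothetical pre-flight check of the generated config.
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
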
	I1129 09:16:39.540985  331191 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1129 09:16:39.545061  331191 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:16:39.556521  331191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:16:39.638452  331191 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:16:39.672589  331191 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/no-preload-897274 for IP: 192.168.94.2
	I1129 09:16:39.672616  331191 certs.go:195] generating shared ca certs ...
	I1129 09:16:39.672638  331191 certs.go:227] acquiring lock for ca certs: {Name:mk9945b3aa42b3d50b9bfd795af3a1c63f4e35bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:39.672802  331191 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key
	I1129 09:16:39.672864  331191 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key
	I1129 09:16:39.672875  331191 certs.go:257] generating profile certs ...
	I1129 09:16:39.672955  331191 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/no-preload-897274/client.key
	I1129 09:16:39.673003  331191 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/no-preload-897274/apiserver.key.c2e76d87
	I1129 09:16:39.673040  331191 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/no-preload-897274/proxy-client.key
	I1129 09:16:39.673153  331191 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216.pem (1338 bytes)
	W1129 09:16:39.673184  331191 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216_empty.pem, impossibly tiny 0 bytes
	I1129 09:16:39.673195  331191 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem (1679 bytes)
	I1129 09:16:39.673219  331191 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem (1078 bytes)
	I1129 09:16:39.673243  331191 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:16:39.673266  331191 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem (1675 bytes)
	I1129 09:16:39.673311  331191 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem (1708 bytes)
	I1129 09:16:39.673926  331191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:16:39.694826  331191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 09:16:39.716408  331191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:16:39.737064  331191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1129 09:16:39.762655  331191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/no-preload-897274/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1129 09:16:39.781753  331191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/no-preload-897274/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1129 09:16:39.800668  331191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/no-preload-897274/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:16:39.819193  331191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/no-preload-897274/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1129 09:16:39.837681  331191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:16:39.856326  331191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216.pem --> /usr/share/ca-certificates/9216.pem (1338 bytes)
	I1129 09:16:39.876000  331191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem --> /usr/share/ca-certificates/92162.pem (1708 bytes)
	I1129 09:16:39.894816  331191 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:16:39.909444  331191 ssh_runner.go:195] Run: openssl version
	I1129 09:16:39.916448  331191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/92162.pem && ln -fs /usr/share/ca-certificates/92162.pem /etc/ssl/certs/92162.pem"
	I1129 09:16:39.926248  331191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92162.pem
	I1129 09:16:39.930365  331191 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:34 /usr/share/ca-certificates/92162.pem
	I1129 09:16:39.930419  331191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92162.pem
	I1129 09:16:39.966433  331191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/92162.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 09:16:39.975157  331191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:16:39.984273  331191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:16:39.988577  331191 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:16:39.988637  331191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:16:40.027295  331191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 09:16:40.037079  331191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9216.pem && ln -fs /usr/share/ca-certificates/9216.pem /etc/ssl/certs/9216.pem"
	I1129 09:16:40.046476  331191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9216.pem
	I1129 09:16:40.050942  331191 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:34 /usr/share/ca-certificates/9216.pem
	I1129 09:16:40.051015  331191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9216.pem
	I1129 09:16:40.087521  331191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9216.pem /etc/ssl/certs/51391683.0"
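The 51391683.0-style link names above are OpenSSL subject hashes, which is why each ln -fs is preceded by an openssl x509 -hash run: the hash printed for a certificate is the link name it receives. Illustrated with the pairing from this log:

    # Prints 51391683 for this cert, matching the symlink created above.
    openssl x509 -hash -noout -in /usr/share/ca-certificates/9216.pem
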
	I1129 09:16:40.096305  331191 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:16:40.100586  331191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1129 09:16:40.137427  331191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1129 09:16:40.188216  331191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1129 09:16:40.243216  331191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1129 09:16:40.303223  331191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1129 09:16:40.363689  331191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
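The -checkend 86400 probes above exit 0 only if the certificate is still valid 24 hours from now, so a non-zero status here would force certificate regeneration. For example (path taken from the first probe):

    # Exit 0: cert valid for at least another 86400s; exit 1: it is not.
    sudo openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo "valid for at least 24h" || echo "expires within 24h"
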
	I1129 09:16:40.404542  331191 kubeadm.go:401] StartCluster: {Name:no-preload-897274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-897274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:16:40.404619  331191 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 09:16:40.404696  331191 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:16:40.436648  331191 cri.go:89] found id: "65cad02ba2a7910bcdcffb28c773b53da4d5023aecfd588deeacf22d8dca4a38"
	I1129 09:16:40.436672  331191 cri.go:89] found id: "ad66b46c591ebaf67ffea99e3f782c8b3c848d695dab97ba85d7b414cf4c3170"
	I1129 09:16:40.436678  331191 cri.go:89] found id: "652695edd3b368ed64211f7ee974fad1ce2be0ae46ac90c153b50e751c36007b"
	I1129 09:16:40.436681  331191 cri.go:89] found id: "aef2b06f8a8ac95a822e5865d9062a9500764f567fb042a1dbeda8630e6e5914"
	I1129 09:16:40.436684  331191 cri.go:89] found id: ""
	I1129 09:16:40.436731  331191 ssh_runner.go:195] Run: sudo runc list -f json
	W1129 09:16:40.450109  331191 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:16:40Z" level=error msg="open /run/runc: no such file or directory"
	I1129 09:16:40.450203  331191 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:16:40.459516  331191 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1129 09:16:40.459543  331191 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1129 09:16:40.459611  331191 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1129 09:16:40.467867  331191 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1129 09:16:40.469024  331191 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-897274" does not appear in /home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 09:16:40.469958  331191 kubeconfig.go:62] /home/jenkins/minikube-integration/22000-5652/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-897274" cluster setting kubeconfig missing "no-preload-897274" context setting]
	I1129 09:16:40.471513  331191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/kubeconfig: {Name:mk6280be5f60ace95b1c1acc672c087e2366542f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:40.473383  331191 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1129 09:16:40.482322  331191 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1129 09:16:40.482376  331191 kubeadm.go:602] duration metric: took 22.824948ms to restartPrimaryControlPlane
	I1129 09:16:40.482397  331191 kubeadm.go:403] duration metric: took 77.857746ms to StartCluster
	I1129 09:16:40.482424  331191 settings.go:142] acquiring lock: {Name:mkebe0b2667fa03f51e92459cd7b7f37fd4c23bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:40.482503  331191 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 09:16:40.484782  331191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/kubeconfig: {Name:mk6280be5f60ace95b1c1acc672c087e2366542f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:40.485100  331191 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 09:16:40.485203  331191 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 09:16:40.485308  331191 addons.go:70] Setting storage-provisioner=true in profile "no-preload-897274"
	I1129 09:16:40.485327  331191 addons.go:70] Setting dashboard=true in profile "no-preload-897274"
	I1129 09:16:40.485335  331191 addons.go:239] Setting addon storage-provisioner=true in "no-preload-897274"
	W1129 09:16:40.485344  331191 addons.go:248] addon storage-provisioner should already be in state true
	I1129 09:16:40.485342  331191 config.go:182] Loaded profile config "no-preload-897274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:16:40.485346  331191 addons.go:239] Setting addon dashboard=true in "no-preload-897274"
	I1129 09:16:40.485372  331191 host.go:66] Checking if "no-preload-897274" exists ...
	I1129 09:16:40.485360  331191 addons.go:70] Setting default-storageclass=true in profile "no-preload-897274"
	W1129 09:16:40.485380  331191 addons.go:248] addon dashboard should already be in state true
	I1129 09:16:40.485404  331191 host.go:66] Checking if "no-preload-897274" exists ...
	I1129 09:16:40.485408  331191 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-897274"
	I1129 09:16:40.485736  331191 cli_runner.go:164] Run: docker container inspect no-preload-897274 --format={{.State.Status}}
	I1129 09:16:40.485916  331191 cli_runner.go:164] Run: docker container inspect no-preload-897274 --format={{.State.Status}}
	I1129 09:16:40.485938  331191 cli_runner.go:164] Run: docker container inspect no-preload-897274 --format={{.State.Status}}
	I1129 09:16:40.493922  331191 out.go:179] * Verifying Kubernetes components...
	I1129 09:16:40.497995  331191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:16:40.516019  331191 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:16:40.518690  331191 addons.go:239] Setting addon default-storageclass=true in "no-preload-897274"
	W1129 09:16:40.519019  331191 addons.go:248] addon default-storageclass should already be in state true
	I1129 09:16:40.519084  331191 host.go:66] Checking if "no-preload-897274" exists ...
	I1129 09:16:40.518884  331191 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1129 09:16:40.518990  331191 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:16:40.519274  331191 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 09:16:40.519332  331191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-897274
	I1129 09:16:40.519572  331191 cli_runner.go:164] Run: docker container inspect no-preload-897274 --format={{.State.Status}}
	I1129 09:16:40.527026  331191 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	
	
	==> CRI-O <==
	Nov 29 09:16:30 embed-certs-160987 crio[776]: time="2025-11-29T09:16:30.857976976Z" level=info msg="Starting container: 5b889c207fdecce0a64122ef724563e29cf39ecb07cf1996bc885a468449337e" id=1e8d3e08-b284-4a20-a6c0-d3b93243b8cc name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 09:16:30 embed-certs-160987 crio[776]: time="2025-11-29T09:16:30.8602246Z" level=info msg="Started container" PID=1857 containerID=5b889c207fdecce0a64122ef724563e29cf39ecb07cf1996bc885a468449337e description=kube-system/coredns-66bc5c9577-ptx67/coredns id=1e8d3e08-b284-4a20-a6c0-d3b93243b8cc name=/runtime.v1.RuntimeService/StartContainer sandboxID=e0275a0fd4e8c22e828da2f262a8a66ec82809cbac54e036f483f1328ee767ea
	Nov 29 09:16:33 embed-certs-160987 crio[776]: time="2025-11-29T09:16:33.441762682Z" level=info msg="Running pod sandbox: default/busybox/POD" id=1da4c7c1-3c60-4b88-86aa-a31ebe383c4f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 09:16:33 embed-certs-160987 crio[776]: time="2025-11-29T09:16:33.441833854Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:16:33 embed-certs-160987 crio[776]: time="2025-11-29T09:16:33.447480971Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1cb794f51d1e35dbbc55fb5ff6fd6d25f314d8facd2835aa2d9d439f4c48a1c9 UID:f749e7c0-d4f3-41c1-987c-5653a82e08e5 NetNS:/var/run/netns/8a79d092-060a-4e01-ba72-b7658105bc69 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0000c2908}] Aliases:map[]}"
	Nov 29 09:16:33 embed-certs-160987 crio[776]: time="2025-11-29T09:16:33.447521798Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 29 09:16:33 embed-certs-160987 crio[776]: time="2025-11-29T09:16:33.460332177Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1cb794f51d1e35dbbc55fb5ff6fd6d25f314d8facd2835aa2d9d439f4c48a1c9 UID:f749e7c0-d4f3-41c1-987c-5653a82e08e5 NetNS:/var/run/netns/8a79d092-060a-4e01-ba72-b7658105bc69 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0000c2908}] Aliases:map[]}"
	Nov 29 09:16:33 embed-certs-160987 crio[776]: time="2025-11-29T09:16:33.460510907Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 29 09:16:33 embed-certs-160987 crio[776]: time="2025-11-29T09:16:33.46169668Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 29 09:16:33 embed-certs-160987 crio[776]: time="2025-11-29T09:16:33.462784884Z" level=info msg="Ran pod sandbox 1cb794f51d1e35dbbc55fb5ff6fd6d25f314d8facd2835aa2d9d439f4c48a1c9 with infra container: default/busybox/POD" id=1da4c7c1-3c60-4b88-86aa-a31ebe383c4f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 09:16:33 embed-certs-160987 crio[776]: time="2025-11-29T09:16:33.464283423Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c7dc6243-2277-4d24-b07b-bdab7e5626bf name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:16:33 embed-certs-160987 crio[776]: time="2025-11-29T09:16:33.464441866Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=c7dc6243-2277-4d24-b07b-bdab7e5626bf name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:16:33 embed-certs-160987 crio[776]: time="2025-11-29T09:16:33.464488103Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=c7dc6243-2277-4d24-b07b-bdab7e5626bf name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:16:33 embed-certs-160987 crio[776]: time="2025-11-29T09:16:33.465385536Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5ce7dfc2-8a3f-42c9-aa53-9180a4f97b16 name=/runtime.v1.ImageService/PullImage
	Nov 29 09:16:33 embed-certs-160987 crio[776]: time="2025-11-29T09:16:33.469686664Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 29 09:16:34 embed-certs-160987 crio[776]: time="2025-11-29T09:16:34.778776563Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=5ce7dfc2-8a3f-42c9-aa53-9180a4f97b16 name=/runtime.v1.ImageService/PullImage
	Nov 29 09:16:34 embed-certs-160987 crio[776]: time="2025-11-29T09:16:34.779587695Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a5783b56-2c7e-4ce6-be43-ef6739ef0df3 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:16:34 embed-certs-160987 crio[776]: time="2025-11-29T09:16:34.78101862Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=991a8aeb-6640-4508-8260-fa908b7b41a2 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:16:34 embed-certs-160987 crio[776]: time="2025-11-29T09:16:34.784223791Z" level=info msg="Creating container: default/busybox/busybox" id=ee91ee11-3751-4150-9cd9-74ec939b5933 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:16:34 embed-certs-160987 crio[776]: time="2025-11-29T09:16:34.784368249Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:16:34 embed-certs-160987 crio[776]: time="2025-11-29T09:16:34.788241037Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:16:34 embed-certs-160987 crio[776]: time="2025-11-29T09:16:34.788722447Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:16:34 embed-certs-160987 crio[776]: time="2025-11-29T09:16:34.813753836Z" level=info msg="Created container 2ed654c75525ff1b53e7161829af90a1151524d9d7e0642ae28f648b50381250: default/busybox/busybox" id=ee91ee11-3751-4150-9cd9-74ec939b5933 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:16:34 embed-certs-160987 crio[776]: time="2025-11-29T09:16:34.814300114Z" level=info msg="Starting container: 2ed654c75525ff1b53e7161829af90a1151524d9d7e0642ae28f648b50381250" id=93ecf291-ae5f-4770-9f96-9ab3d1a70dae name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 09:16:34 embed-certs-160987 crio[776]: time="2025-11-29T09:16:34.816515746Z" level=info msg="Started container" PID=1936 containerID=2ed654c75525ff1b53e7161829af90a1151524d9d7e0642ae28f648b50381250 description=default/busybox/busybox id=93ecf291-ae5f-4770-9f96-9ab3d1a70dae name=/runtime.v1.RuntimeService/StartContainer sandboxID=1cb794f51d1e35dbbc55fb5ff6fd6d25f314d8facd2835aa2d9d439f4c48a1c9
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	2ed654c75525f       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   1cb794f51d1e3       busybox                                      default
	5b889c207fdec       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      10 seconds ago      Running             coredns                   0                   e0275a0fd4e8c       coredns-66bc5c9577-ptx67                     kube-system
	bcdf468cd0900       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 seconds ago      Running             storage-provisioner       0                   8cfe81c88a8ab       storage-provisioner                          kube-system
	18bbca95f554f       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      22 seconds ago      Running             kube-proxy                0                   6ec99782dafd5       kube-proxy-57l9h                             kube-system
	e60f0af6879ca       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      22 seconds ago      Running             kindnet-cni               0                   062216c2ae3d0       kindnet-cvmj6                                kube-system
	c2ad11b427078       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      32 seconds ago      Running             etcd                      0                   380eb5a65c69a       etcd-embed-certs-160987                      kube-system
	3b4cbf49ce214       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      32 seconds ago      Running             kube-controller-manager   0                   a4a6359fc0ec8       kube-controller-manager-embed-certs-160987   kube-system
	2c3a2480bd2de       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      32 seconds ago      Running             kube-scheduler            0                   4c4dcf8d50a5e       kube-scheduler-embed-certs-160987            kube-system
	03ab286676d70       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      32 seconds ago      Running             kube-apiserver            0                   e16fe43dfef7c       kube-apiserver-embed-certs-160987            kube-system
	
	
	==> coredns [5b889c207fdecce0a64122ef724563e29cf39ecb07cf1996bc885a468449337e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50131 - 12053 "HINFO IN 8486196588115871450.6379079399516449813. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.017571708s
	
	
	==> describe nodes <==
	Name:               embed-certs-160987
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-160987
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=embed-certs-160987
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T09_16_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 09:16:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-160987
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 09:16:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 09:16:34 +0000   Sat, 29 Nov 2025 09:16:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 09:16:34 +0000   Sat, 29 Nov 2025 09:16:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 09:16:34 +0000   Sat, 29 Nov 2025 09:16:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 09:16:34 +0000   Sat, 29 Nov 2025 09:16:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-160987
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                01febc21-6293-4ce5-852c-5d2b1b91b577
	  Boot ID:                    5b66e152-4b71-4129-a911-cefbf4861c86
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-66bc5c9577-ptx67                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     22s
	  kube-system                 etcd-embed-certs-160987                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         27s
	  kube-system                 kindnet-cvmj6                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      22s
	  kube-system                 kube-apiserver-embed-certs-160987             250m (3%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-controller-manager-embed-certs-160987    200m (2%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-proxy-57l9h                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  kube-system                 kube-scheduler-embed-certs-160987             100m (1%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 22s   kube-proxy       
	  Normal  Starting                 28s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27s   kubelet          Node embed-certs-160987 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s   kubelet          Node embed-certs-160987 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s   kubelet          Node embed-certs-160987 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           23s   node-controller  Node embed-certs-160987 event: Registered Node embed-certs-160987 in Controller
	  Normal  NodeReady                11s   kubelet          Node embed-certs-160987 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 e1 87 e4 84 ae 08 06
	[  +0.002640] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8a c7 6c 3e 12 d1 08 06
	[ +14.778105] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 56 8a 29 c7 2a 6f 08 06
	[ +13.520989] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 86 32 aa eb 52 77 08 06
	[  +0.000371] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 56 8a 29 c7 2a 6f 08 06
	[ +14.762906] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 df ae 93 da f6 08 06
	[  +0.000384] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a c7 6c 3e 12 d1 08 06
	[Nov29 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 2e a3 ac f9 5c c0 08 06
	[ +13.297626] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 33 ff 54 aa a1 08 06
	[  +7.390906] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0a 68 f3 3d 3c e9 08 06
	[  +0.000436] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 2e a3 ac f9 5c c0 08 06
	[  +7.145922] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ea c3 2f 2a a3 01 08 06
	[  +0.000415] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e2 33 ff 54 aa a1 08 06
	
	
	==> etcd [c2ad11b427078a72f28264e6f298743b42c54b077e66036fd55d36ada5852832] <==
	{"level":"warn","ts":"2025-11-29T09:16:10.459037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:10.467208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:10.477597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:10.485078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:10.493185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:10.506321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:10.513887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:10.521345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:10.529193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:10.538209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:10.559963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:10.568348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:10.576655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:10.584770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:10.593101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:10.601407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:10.617686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:10.625595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:10.634304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:10.642054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:10.656133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:10.660178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:10.668225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:10.675120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:10.736429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52928","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:16:42 up 59 min,  0 user,  load average: 4.83, 4.03, 2.48
	Linux embed-certs-160987 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e60f0af6879cae88a94a2727605f40b6ee682c3f9e5ae1f5acf2105468a02038] <==
	I1129 09:16:19.731121       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 09:16:19.731452       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1129 09:16:19.731687       1 main.go:148] setting mtu 1500 for CNI 
	I1129 09:16:19.731710       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 09:16:19.731742       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T09:16:20Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 09:16:20.010580       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 09:16:20.010613       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 09:16:20.010624       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 09:16:20.029366       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1129 09:16:20.429340       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 09:16:20.429381       1 metrics.go:72] Registering metrics
	I1129 09:16:20.509726       1 controller.go:711] "Syncing nftables rules"
	I1129 09:16:30.012299       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 09:16:30.012353       1 main.go:301] handling current node
	I1129 09:16:40.014946       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 09:16:40.014987       1 main.go:301] handling current node
	
	
	==> kube-apiserver [03ab286676d70fb342f8b211d1d033e4d6c5bde8b4c88f73d5c4f09ed1cd578f] <==
	I1129 09:16:11.390000       1 controller.go:667] quota admission added evaluator for: namespaces
	I1129 09:16:11.393074       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:16:11.393291       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1129 09:16:11.399413       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1129 09:16:11.399447       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1129 09:16:11.399487       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:16:11.501888       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 09:16:12.195488       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1129 09:16:12.203479       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1129 09:16:12.203506       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 09:16:12.849437       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 09:16:12.894003       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 09:16:12.998468       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1129 09:16:13.007782       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1129 09:16:13.008986       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 09:16:13.013692       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 09:16:13.234504       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 09:16:14.012113       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 09:16:14.027483       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1129 09:16:14.037770       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1129 09:16:18.888045       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:16:18.893308       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:16:19.036759       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1129 09:16:19.335205       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1129 09:16:40.250433       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:47628: use of closed network connection
	
	
	==> kube-controller-manager [3b4cbf49ce214b8096eb86bf42f4e2162f41027243a29639f11ad59c883335f3] <==
	I1129 09:16:18.231833       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1129 09:16:18.231884       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1129 09:16:18.232220       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1129 09:16:18.232264       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1129 09:16:18.232401       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1129 09:16:18.232541       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-160987"
	I1129 09:16:18.232591       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1129 09:16:18.232851       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1129 09:16:18.232968       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1129 09:16:18.233082       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1129 09:16:18.233230       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1129 09:16:18.233403       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1129 09:16:18.233955       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1129 09:16:18.234090       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1129 09:16:18.235749       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1129 09:16:18.236154       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1129 09:16:18.237898       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1129 09:16:18.239217       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1129 09:16:18.239356       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:16:18.241521       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1129 09:16:18.243752       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:16:18.246909       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1129 09:16:18.260092       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1129 09:16:18.262414       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:16:33.234637       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [18bbca95f554f479e5f26b81da9aa91ce4417ba1c79e71455430e92d4159d5ba] <==
	I1129 09:16:19.457133       1 server_linux.go:53] "Using iptables proxy"
	I1129 09:16:19.531463       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 09:16:19.632460       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 09:16:19.632519       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1129 09:16:19.632673       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 09:16:19.678878       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 09:16:19.678959       1 server_linux.go:132] "Using iptables Proxier"
	I1129 09:16:19.692799       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 09:16:19.693763       1 server.go:527] "Version info" version="v1.34.1"
	I1129 09:16:19.694201       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:16:19.697299       1 config.go:200] "Starting service config controller"
	I1129 09:16:19.697321       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 09:16:19.697428       1 config.go:106] "Starting endpoint slice config controller"
	I1129 09:16:19.697459       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 09:16:19.697531       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 09:16:19.697564       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 09:16:19.698729       1 config.go:309] "Starting node config controller"
	I1129 09:16:19.698812       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 09:16:19.698885       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 09:16:19.797798       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 09:16:19.797836       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1129 09:16:19.797931       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [2c3a2480bd2deff2d98d1a710702acfd7834089739831b60763a25c261516326] <==
	E1129 09:16:11.261423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1129 09:16:11.261466       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1129 09:16:11.261490       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 09:16:11.261534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 09:16:11.261547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 09:16:11.261645       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1129 09:16:11.261659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1129 09:16:11.261260       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1129 09:16:11.261972       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1129 09:16:11.261987       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 09:16:12.107220       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 09:16:12.155811       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1129 09:16:12.189313       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1129 09:16:12.202626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1129 09:16:12.258104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1129 09:16:12.299531       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1129 09:16:12.305802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1129 09:16:12.341149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 09:16:12.415972       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1129 09:16:12.458466       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 09:16:12.494012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1129 09:16:12.536640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 09:16:12.538563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 09:16:12.727030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1129 09:16:14.957373       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 09:16:14 embed-certs-160987 kubelet[1328]: I1129 09:16:14.984526    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-160987" podStartSLOduration=0.983711186 podStartE2EDuration="983.711186ms" podCreationTimestamp="2025-11-29 09:16:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:16:14.982553647 +0000 UTC m=+1.184060787" watchObservedRunningTime="2025-11-29 09:16:14.983711186 +0000 UTC m=+1.185218326"
	Nov 29 09:16:14 embed-certs-160987 kubelet[1328]: I1129 09:16:14.984743    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-160987" podStartSLOduration=0.984732717 podStartE2EDuration="984.732717ms" podCreationTimestamp="2025-11-29 09:16:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:16:14.959703957 +0000 UTC m=+1.161211097" watchObservedRunningTime="2025-11-29 09:16:14.984732717 +0000 UTC m=+1.186239857"
	Nov 29 09:16:15 embed-certs-160987 kubelet[1328]: I1129 09:16:15.010634    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-160987" podStartSLOduration=1.010609164 podStartE2EDuration="1.010609164s" podCreationTimestamp="2025-11-29 09:16:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:16:14.995486195 +0000 UTC m=+1.196993335" watchObservedRunningTime="2025-11-29 09:16:15.010609164 +0000 UTC m=+1.212116286"
	Nov 29 09:16:15 embed-certs-160987 kubelet[1328]: I1129 09:16:15.022335    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-160987" podStartSLOduration=1.022310936 podStartE2EDuration="1.022310936s" podCreationTimestamp="2025-11-29 09:16:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:16:15.010375544 +0000 UTC m=+1.211882683" watchObservedRunningTime="2025-11-29 09:16:15.022310936 +0000 UTC m=+1.223818072"
	Nov 29 09:16:18 embed-certs-160987 kubelet[1328]: I1129 09:16:18.247704    1328 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 29 09:16:18 embed-certs-160987 kubelet[1328]: I1129 09:16:18.248833    1328 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 29 09:16:19 embed-certs-160987 kubelet[1328]: I1129 09:16:19.117596    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/239c4b88-9d52-42da-ae39-5eb83d7d3fd1-xtables-lock\") pod \"kindnet-cvmj6\" (UID: \"239c4b88-9d52-42da-ae39-5eb83d7d3fd1\") " pod="kube-system/kindnet-cvmj6"
	Nov 29 09:16:19 embed-certs-160987 kubelet[1328]: I1129 09:16:19.117650    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgrzp\" (UniqueName: \"kubernetes.io/projected/239c4b88-9d52-42da-ae39-5eb83d7d3fd1-kube-api-access-wgrzp\") pod \"kindnet-cvmj6\" (UID: \"239c4b88-9d52-42da-ae39-5eb83d7d3fd1\") " pod="kube-system/kindnet-cvmj6"
	Nov 29 09:16:19 embed-certs-160987 kubelet[1328]: I1129 09:16:19.117702    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93cda014-998a-4285-81c6-bead54a287e2-xtables-lock\") pod \"kube-proxy-57l9h\" (UID: \"93cda014-998a-4285-81c6-bead54a287e2\") " pod="kube-system/kube-proxy-57l9h"
	Nov 29 09:16:19 embed-certs-160987 kubelet[1328]: I1129 09:16:19.117769    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/93cda014-998a-4285-81c6-bead54a287e2-kube-proxy\") pod \"kube-proxy-57l9h\" (UID: \"93cda014-998a-4285-81c6-bead54a287e2\") " pod="kube-system/kube-proxy-57l9h"
	Nov 29 09:16:19 embed-certs-160987 kubelet[1328]: I1129 09:16:19.117809    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22w5w\" (UniqueName: \"kubernetes.io/projected/93cda014-998a-4285-81c6-bead54a287e2-kube-api-access-22w5w\") pod \"kube-proxy-57l9h\" (UID: \"93cda014-998a-4285-81c6-bead54a287e2\") " pod="kube-system/kube-proxy-57l9h"
	Nov 29 09:16:19 embed-certs-160987 kubelet[1328]: I1129 09:16:19.117905    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/239c4b88-9d52-42da-ae39-5eb83d7d3fd1-cni-cfg\") pod \"kindnet-cvmj6\" (UID: \"239c4b88-9d52-42da-ae39-5eb83d7d3fd1\") " pod="kube-system/kindnet-cvmj6"
	Nov 29 09:16:19 embed-certs-160987 kubelet[1328]: I1129 09:16:19.117961    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/239c4b88-9d52-42da-ae39-5eb83d7d3fd1-lib-modules\") pod \"kindnet-cvmj6\" (UID: \"239c4b88-9d52-42da-ae39-5eb83d7d3fd1\") " pod="kube-system/kindnet-cvmj6"
	Nov 29 09:16:19 embed-certs-160987 kubelet[1328]: I1129 09:16:19.118018    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93cda014-998a-4285-81c6-bead54a287e2-lib-modules\") pod \"kube-proxy-57l9h\" (UID: \"93cda014-998a-4285-81c6-bead54a287e2\") " pod="kube-system/kube-proxy-57l9h"
	Nov 29 09:16:19 embed-certs-160987 kubelet[1328]: I1129 09:16:19.970601    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-cvmj6" podStartSLOduration=0.970574998 podStartE2EDuration="970.574998ms" podCreationTimestamp="2025-11-29 09:16:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:16:19.970170278 +0000 UTC m=+6.171677420" watchObservedRunningTime="2025-11-29 09:16:19.970574998 +0000 UTC m=+6.172082138"
	Nov 29 09:16:19 embed-certs-160987 kubelet[1328]: I1129 09:16:19.970769    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-57l9h" podStartSLOduration=0.970754209 podStartE2EDuration="970.754209ms" podCreationTimestamp="2025-11-29 09:16:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:16:19.95278906 +0000 UTC m=+6.154296200" watchObservedRunningTime="2025-11-29 09:16:19.970754209 +0000 UTC m=+6.172261349"
	Nov 29 09:16:30 embed-certs-160987 kubelet[1328]: I1129 09:16:30.437337    1328 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 29 09:16:30 embed-certs-160987 kubelet[1328]: I1129 09:16:30.599698    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfqbr\" (UniqueName: \"kubernetes.io/projected/3cdde537-5064-49d7-8c8b-367639774c63-kube-api-access-rfqbr\") pod \"coredns-66bc5c9577-ptx67\" (UID: \"3cdde537-5064-49d7-8c8b-367639774c63\") " pod="kube-system/coredns-66bc5c9577-ptx67"
	Nov 29 09:16:30 embed-certs-160987 kubelet[1328]: I1129 09:16:30.599760    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3e04560b-9e25-4b2e-9f7e-d55b0ae42dbd-tmp\") pod \"storage-provisioner\" (UID: \"3e04560b-9e25-4b2e-9f7e-d55b0ae42dbd\") " pod="kube-system/storage-provisioner"
	Nov 29 09:16:30 embed-certs-160987 kubelet[1328]: I1129 09:16:30.599788    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3cdde537-5064-49d7-8c8b-367639774c63-config-volume\") pod \"coredns-66bc5c9577-ptx67\" (UID: \"3cdde537-5064-49d7-8c8b-367639774c63\") " pod="kube-system/coredns-66bc5c9577-ptx67"
	Nov 29 09:16:30 embed-certs-160987 kubelet[1328]: I1129 09:16:30.599814    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8978q\" (UniqueName: \"kubernetes.io/projected/3e04560b-9e25-4b2e-9f7e-d55b0ae42dbd-kube-api-access-8978q\") pod \"storage-provisioner\" (UID: \"3e04560b-9e25-4b2e-9f7e-d55b0ae42dbd\") " pod="kube-system/storage-provisioner"
	Nov 29 09:16:31 embed-certs-160987 kubelet[1328]: I1129 09:16:31.007431    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-ptx67" podStartSLOduration=12.007414094 podStartE2EDuration="12.007414094s" podCreationTimestamp="2025-11-29 09:16:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:16:30.987649006 +0000 UTC m=+17.189156146" watchObservedRunningTime="2025-11-29 09:16:31.007414094 +0000 UTC m=+17.208921233"
	Nov 29 09:16:31 embed-certs-160987 kubelet[1328]: I1129 09:16:31.007540    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.00753457 podStartE2EDuration="11.00753457s" podCreationTimestamp="2025-11-29 09:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:16:31.00686927 +0000 UTC m=+17.208376410" watchObservedRunningTime="2025-11-29 09:16:31.00753457 +0000 UTC m=+17.209041712"
	Nov 29 09:16:33 embed-certs-160987 kubelet[1328]: I1129 09:16:33.219186    1328 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rrqr\" (UniqueName: \"kubernetes.io/projected/f749e7c0-d4f3-41c1-987c-5653a82e08e5-kube-api-access-5rrqr\") pod \"busybox\" (UID: \"f749e7c0-d4f3-41c1-987c-5653a82e08e5\") " pod="default/busybox"
	Nov 29 09:16:34 embed-certs-160987 kubelet[1328]: I1129 09:16:34.992517    1328 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.676852558 podStartE2EDuration="1.992496229s" podCreationTimestamp="2025-11-29 09:16:33 +0000 UTC" firstStartedPulling="2025-11-29 09:16:33.464829551 +0000 UTC m=+19.666336689" lastFinishedPulling="2025-11-29 09:16:34.780473242 +0000 UTC m=+20.981980360" observedRunningTime="2025-11-29 09:16:34.992050517 +0000 UTC m=+21.193557657" watchObservedRunningTime="2025-11-29 09:16:34.992496229 +0000 UTC m=+21.194003360"
	
	
	==> storage-provisioner [bcdf468cd0900b7277e99ba09acc2ec858e8ad0871356aec781982eee4562365] <==
	I1129 09:16:30.868235       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1129 09:16:30.878031       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1129 09:16:30.878120       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1129 09:16:30.880694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:16:30.887462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:16:30.887672       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 09:16:30.887813       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-160987_7f0211db-523e-46d9-8a18-aa1fffa0bd0b!
	I1129 09:16:30.887833       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9795b373-c1b1-46fc-9f5b-0328f9c89ace", APIVersion:"v1", ResourceVersion:"406", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-160987_7f0211db-523e-46d9-8a18-aa1fffa0bd0b became leader
	W1129 09:16:30.889976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:16:30.896144       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:16:30.988108       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-160987_7f0211db-523e-46d9-8a18-aa1fffa0bd0b!
	W1129 09:16:32.899083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:16:32.903827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:16:34.907777       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:16:34.912362       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:16:36.916033       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:16:36.921190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:16:38.924507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:16:38.928593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:16:40.932896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:16:40.937565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-160987 -n embed-certs-160987
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-160987 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.68s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.66s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-632243 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-632243 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (331.785265ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:16:42Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-632243 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
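Note on the failure mode: the exit status 11 above comes from minikube's paused-state check, which shells into the node container and runs `sudo runc list -f json`; per the stderr it fails because /run/runc does not exist. A minimal sketch for reproducing that check by hand, assuming the docker driver names the node container after the profile (the docker inspect output further down confirms /default-k8s-diff-port-632243):

    # Open a shell on the node container (named after the profile
    # under the docker driver).
    docker exec -it default-k8s-diff-port-632243 bash

    # Re-run the check minikube performs; on this node it exits 1
    # with "open /run/runc: no such file or directory".
    sudo runc list -f json

    # Confirm whether the runc state directory exists at all.
    ls -ld /run/runc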
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-632243 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-632243 describe deploy/metrics-server -n kube-system: exit status 1 (97.967901ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-632243 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
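Because the enable command itself exited non-zero, the metrics-server deployment was never created, which is why the describe call above returns NotFound and the image assertion has nothing to inspect. A sketch of how the expected override could be verified by hand on a run where the addon enables cleanly (same kubectl context as above; the jsonpath assumes the standard single-container Deployment layout):

    # Print the image set on the metrics-server deployment; the test
    # expects it to contain fake.domain/registry.k8s.io/echoserver:1.4.
    kubectl --context default-k8s-diff-port-632243 -n kube-system \
      get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'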
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-632243
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-632243:

-- stdout --
	[
	    {
	        "Id": "34542347c69bda84e7d5a150f01f074a3f776313cbcfbd090bb64fe6de277d88",
	        "Created": "2025-11-29T09:16:00.909438015Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 323121,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T09:16:00.947997595Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/34542347c69bda84e7d5a150f01f074a3f776313cbcfbd090bb64fe6de277d88/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/34542347c69bda84e7d5a150f01f074a3f776313cbcfbd090bb64fe6de277d88/hostname",
	        "HostsPath": "/var/lib/docker/containers/34542347c69bda84e7d5a150f01f074a3f776313cbcfbd090bb64fe6de277d88/hosts",
	        "LogPath": "/var/lib/docker/containers/34542347c69bda84e7d5a150f01f074a3f776313cbcfbd090bb64fe6de277d88/34542347c69bda84e7d5a150f01f074a3f776313cbcfbd090bb64fe6de277d88-json.log",
	        "Name": "/default-k8s-diff-port-632243",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-632243:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-632243",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "34542347c69bda84e7d5a150f01f074a3f776313cbcfbd090bb64fe6de277d88",
	                "LowerDir": "/var/lib/docker/overlay2/7263fb3772af2f1b363fa16d989f215dd7f46480236fb7471fbfb55fcc94f1fb-init/diff:/var/lib/docker/overlay2/5b012372cfb54f6c71f4d7f0bca0124866eeda530eaa04bd84a67c7b4c8b35a8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7263fb3772af2f1b363fa16d989f215dd7f46480236fb7471fbfb55fcc94f1fb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7263fb3772af2f1b363fa16d989f215dd7f46480236fb7471fbfb55fcc94f1fb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7263fb3772af2f1b363fa16d989f215dd7f46480236fb7471fbfb55fcc94f1fb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-632243",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-632243/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-632243",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-632243",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-632243",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "05ca2e4d48491be1eaa6478c08ba1e6eaf14201fea7d8c9fa90e5917b20091d1",
	            "SandboxKey": "/var/run/docker/netns/05ca2e4d4849",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-632243": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7a23ed3dab8d4d6fb6f9edc51b6864da564467aa8f10cf2599da81a3bf2593e1",
	                    "EndpointID": "f9f8389deee8e993727438241dc97d39662ef987a5d8350f28ac0dc15449d40e",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "86:3a:cf:37:f7:4a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-632243",
	                        "34542347c69b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
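The post-mortem dumps the full container JSON, although the section header only concerns network settings. Those fields can be queried directly with an inspect format template; a minimal sketch against the same container (the format strings are illustrative, not commands run by the harness):

	# Print just the published host ports (e.g. 8444/tcp -> 127.0.0.1:33107 above).
	docker inspect -f '{{json .NetworkSettings.Ports}}' default-k8s-diff-port-632243
	# Print the container IP on its minikube network (192.168.103.2 above).
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' default-k8s-diff-port-632243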
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-632243 -n default-k8s-diff-port-632243
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-632243 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-632243 logs -n 25: (1.199865189s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-628644 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ start   │ -p embed-certs-160987 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-628644 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │                     │
	│ ssh     │ -p bridge-628644 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo containerd config dump                                                                                                                                                                                                  │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo crio config                                                                                                                                                                                                             │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ delete  │ -p bridge-628644                                                                                                                                                                                                                              │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ delete  │ -p disable-driver-mounts-327778                                                                                                                                                                                                               │ disable-driver-mounts-327778 │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ start   │ -p default-k8s-diff-port-632243 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:16 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-680646 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	│ stop    │ -p old-k8s-version-680646 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:16 UTC │
	│ addons  │ enable metrics-server -p no-preload-897274 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	│ stop    │ -p no-preload-897274 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:16 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-680646 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:16 UTC │
	│ start   │ -p old-k8s-version-680646 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	│ start   │ -p no-preload-897274 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-160987 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-632243 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	│ stop    │ -p embed-certs-160987 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:16:33
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 09:16:33.122678  331191 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:16:33.122964  331191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:16:33.122976  331191 out.go:374] Setting ErrFile to fd 2...
	I1129 09:16:33.122983  331191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:16:33.123284  331191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 09:16:33.123913  331191 out.go:368] Setting JSON to false
	I1129 09:16:33.125477  331191 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3545,"bootTime":1764404248,"procs":399,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 09:16:33.125566  331191 start.go:143] virtualization: kvm guest
	I1129 09:16:33.127567  331191 out.go:179] * [no-preload-897274] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 09:16:33.129322  331191 notify.go:221] Checking for updates...
	I1129 09:16:33.129396  331191 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:16:33.133381  331191 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:16:33.135622  331191 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 09:16:33.136799  331191 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5652/.minikube
	I1129 09:16:33.138360  331191 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 09:16:33.141984  331191 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:16:33.144186  331191 config.go:182] Loaded profile config "no-preload-897274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:16:33.144997  331191 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:16:33.171095  331191 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1129 09:16:33.171229  331191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:16:33.235991  331191 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-29 09:16:33.224229786 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:16:33.236090  331191 docker.go:319] overlay module found
	I1129 09:16:33.238584  331191 out.go:179] * Using the docker driver based on existing profile
	I1129 09:16:33.239751  331191 start.go:309] selected driver: docker
	I1129 09:16:33.239767  331191 start.go:927] validating driver "docker" against &{Name:no-preload-897274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-897274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:16:33.239938  331191 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:16:33.240643  331191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:16:33.304676  331191 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-29 09:16:33.294656443 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:16:33.304989  331191 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:16:33.305022  331191 cni.go:84] Creating CNI manager for ""
	I1129 09:16:33.305082  331191 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:16:33.305121  331191 start.go:353] cluster config:
	{Name:no-preload-897274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-897274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:16:33.306884  331191 out.go:179] * Starting "no-preload-897274" primary control-plane node in "no-preload-897274" cluster
	I1129 09:16:33.308057  331191 cache.go:134] Beginning downloading kic base image for docker with crio
	I1129 09:16:33.309446  331191 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 09:16:33.310547  331191 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:16:33.310649  331191 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 09:16:33.310704  331191 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/no-preload-897274/config.json ...
	I1129 09:16:33.310903  331191 cache.go:107] acquiring lock: {Name:mk8f7573c1bcf364ee3e869844e236299ef911a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:16:33.310957  331191 cache.go:107] acquiring lock: {Name:mk3d47c34f6428afe07538d6b2903bd93c895587 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:16:33.310990  331191 cache.go:107] acquiring lock: {Name:mkd8c083b40056ddf2bcea6e5d97bd63c854310f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:16:33.311021  331191 cache.go:115] /home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1129 09:16:33.310956  331191 cache.go:107] acquiring lock: {Name:mk8aac6c82be99816e28146313299368d69d5087 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:16:33.311008  331191 cache.go:107] acquiring lock: {Name:mk422b9f5e82d6d6cab524cfb12c9a0d353a9e30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:16:33.311032  331191 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 148.823µs
	I1129 09:16:33.311049  331191 cache.go:115] /home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1129 09:16:33.310911  331191 cache.go:107] acquiring lock: {Name:mk40dde31b69aa254af83ecc3b922eeafac6b928 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:16:33.311059  331191 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 75.229µs
	I1129 09:16:33.311068  331191 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1129 09:16:33.311049  331191 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1129 09:16:33.311072  331191 cache.go:115] /home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1129 09:16:33.311085  331191 cache.go:115] /home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1129 09:16:33.311086  331191 cache.go:115] /home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1129 09:16:33.311086  331191 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 130.998µs
	I1129 09:16:33.311093  331191 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 206.951µs
	I1129 09:16:33.311096  331191 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1129 09:16:33.311096  331191 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 163.829µs
	I1129 09:16:33.311101  331191 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1129 09:16:33.311106  331191 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1129 09:16:33.311043  331191 cache.go:107] acquiring lock: {Name:mk1202721af231e365c67615309450a51ff4e3b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:16:33.311114  331191 cache.go:115] /home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1129 09:16:33.311122  331191 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 161.828µs
	I1129 09:16:33.311139  331191 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1129 09:16:33.311074  331191 cache.go:107] acquiring lock: {Name:mka04b02303b6e225ac2b476db413ffbfd8b53c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:16:33.311197  331191 cache.go:115] /home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1129 09:16:33.311222  331191 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 232.578µs
	I1129 09:16:33.311233  331191 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1129 09:16:33.311259  331191 cache.go:115] /home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1129 09:16:33.311275  331191 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 249.776µs
	I1129 09:16:33.311282  331191 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/22000-5652/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1129 09:16:33.311289  331191 cache.go:87] Successfully saved all images to host disk.
	I1129 09:16:33.333608  331191 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 09:16:33.333631  331191 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 09:16:33.333651  331191 cache.go:243] Successfully downloaded all kic artifacts
	I1129 09:16:33.333689  331191 start.go:360] acquireMachinesLock for no-preload-897274: {Name:mk26d63983c64bd83bbc5a0fb0c10ac2c7be5a49 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:16:33.333757  331191 start.go:364] duration metric: took 46.1µs to acquireMachinesLock for "no-preload-897274"
	I1129 09:16:33.333778  331191 start.go:96] Skipping create...Using existing machine configuration
	I1129 09:16:33.333786  331191 fix.go:54] fixHost starting: 
	I1129 09:16:33.334036  331191 cli_runner.go:164] Run: docker container inspect no-preload-897274 --format={{.State.Status}}
	I1129 09:16:33.352834  331191 fix.go:112] recreateIfNeeded on no-preload-897274: state=Stopped err=<nil>
	W1129 09:16:33.352882  331191 fix.go:138] unexpected machine state, will restart: <nil>
	I1129 09:16:32.041544  322024 node_ready.go:49] node "default-k8s-diff-port-632243" is "Ready"
	I1129 09:16:32.041571  322024 node_ready.go:38] duration metric: took 11.003549148s for node "default-k8s-diff-port-632243" to be "Ready" ...
	I1129 09:16:32.041585  322024 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:16:32.041642  322024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:16:32.054761  322024 api_server.go:72] duration metric: took 11.352970675s to wait for apiserver process to appear ...
	I1129 09:16:32.054785  322024 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:16:32.054802  322024 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1129 09:16:32.060196  322024 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1129 09:16:32.061315  322024 api_server.go:141] control plane version: v1.34.1
	I1129 09:16:32.061345  322024 api_server.go:131] duration metric: took 6.553174ms to wait for apiserver health ...
	I1129 09:16:32.061356  322024 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:16:32.065191  322024 system_pods.go:59] 8 kube-system pods found
	I1129 09:16:32.065228  322024 system_pods.go:61] "coredns-66bc5c9577-z4m7c" [98358d85-a090-44af-b52c-b5043215489d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:16:32.065236  322024 system_pods.go:61] "etcd-default-k8s-diff-port-632243" [09a34b15-fbfc-4348-90c4-e24e6baf1a19] Running
	I1129 09:16:32.065251  322024 system_pods.go:61] "kindnet-tpstm" [15e600f0-69fa-43be-ad87-07a80e245c73] Running
	I1129 09:16:32.065258  322024 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-632243" [05294706-b493-4660-8b69-19a3686ec539] Running
	I1129 09:16:32.065266  322024 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-632243" [fb12ecb8-1c38-404c-b1f5-c52bd3c76ae3] Running
	I1129 09:16:32.065273  322024 system_pods.go:61] "kube-proxy-p2nf7" [50905f73-5af2-401c-a482-7d68d8d3bdc4] Running
	I1129 09:16:32.065282  322024 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-632243" [31003176-dbcb-4f15-88c6-ea1592ffdf1b] Running
	I1129 09:16:32.065290  322024 system_pods.go:61] "storage-provisioner" [b28962e0-c388-44d7-8e57-e4030e80dabd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:16:32.065303  322024 system_pods.go:74] duration metric: took 3.937854ms to wait for pod list to return data ...
	I1129 09:16:32.065316  322024 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:16:32.067970  322024 default_sa.go:45] found service account: "default"
	I1129 09:16:32.067992  322024 default_sa.go:55] duration metric: took 2.670246ms for default service account to be created ...
	I1129 09:16:32.068003  322024 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 09:16:32.070916  322024 system_pods.go:86] 8 kube-system pods found
	I1129 09:16:32.070942  322024 system_pods.go:89] "coredns-66bc5c9577-z4m7c" [98358d85-a090-44af-b52c-b5043215489d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:16:32.070948  322024 system_pods.go:89] "etcd-default-k8s-diff-port-632243" [09a34b15-fbfc-4348-90c4-e24e6baf1a19] Running
	I1129 09:16:32.070955  322024 system_pods.go:89] "kindnet-tpstm" [15e600f0-69fa-43be-ad87-07a80e245c73] Running
	I1129 09:16:32.070959  322024 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-632243" [05294706-b493-4660-8b69-19a3686ec539] Running
	I1129 09:16:32.070970  322024 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-632243" [fb12ecb8-1c38-404c-b1f5-c52bd3c76ae3] Running
	I1129 09:16:32.070976  322024 system_pods.go:89] "kube-proxy-p2nf7" [50905f73-5af2-401c-a482-7d68d8d3bdc4] Running
	I1129 09:16:32.070980  322024 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-632243" [31003176-dbcb-4f15-88c6-ea1592ffdf1b] Running
	I1129 09:16:32.070984  322024 system_pods.go:89] "storage-provisioner" [b28962e0-c388-44d7-8e57-e4030e80dabd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:16:32.071010  322024 retry.go:31] will retry after 234.871211ms: missing components: kube-dns
	I1129 09:16:32.310261  322024 system_pods.go:86] 8 kube-system pods found
	I1129 09:16:32.310290  322024 system_pods.go:89] "coredns-66bc5c9577-z4m7c" [98358d85-a090-44af-b52c-b5043215489d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:16:32.310299  322024 system_pods.go:89] "etcd-default-k8s-diff-port-632243" [09a34b15-fbfc-4348-90c4-e24e6baf1a19] Running
	I1129 09:16:32.310305  322024 system_pods.go:89] "kindnet-tpstm" [15e600f0-69fa-43be-ad87-07a80e245c73] Running
	I1129 09:16:32.310310  322024 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-632243" [05294706-b493-4660-8b69-19a3686ec539] Running
	I1129 09:16:32.310313  322024 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-632243" [fb12ecb8-1c38-404c-b1f5-c52bd3c76ae3] Running
	I1129 09:16:32.310318  322024 system_pods.go:89] "kube-proxy-p2nf7" [50905f73-5af2-401c-a482-7d68d8d3bdc4] Running
	I1129 09:16:32.310321  322024 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-632243" [31003176-dbcb-4f15-88c6-ea1592ffdf1b] Running
	I1129 09:16:32.310326  322024 system_pods.go:89] "storage-provisioner" [b28962e0-c388-44d7-8e57-e4030e80dabd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:16:32.310344  322024 retry.go:31] will retry after 262.041893ms: missing components: kube-dns
	I1129 09:16:32.577309  322024 system_pods.go:86] 8 kube-system pods found
	I1129 09:16:32.577355  322024 system_pods.go:89] "coredns-66bc5c9577-z4m7c" [98358d85-a090-44af-b52c-b5043215489d] Running
	I1129 09:16:32.577363  322024 system_pods.go:89] "etcd-default-k8s-diff-port-632243" [09a34b15-fbfc-4348-90c4-e24e6baf1a19] Running
	I1129 09:16:32.577375  322024 system_pods.go:89] "kindnet-tpstm" [15e600f0-69fa-43be-ad87-07a80e245c73] Running
	I1129 09:16:32.577380  322024 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-632243" [05294706-b493-4660-8b69-19a3686ec539] Running
	I1129 09:16:32.577386  322024 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-632243" [fb12ecb8-1c38-404c-b1f5-c52bd3c76ae3] Running
	I1129 09:16:32.577391  322024 system_pods.go:89] "kube-proxy-p2nf7" [50905f73-5af2-401c-a482-7d68d8d3bdc4] Running
	I1129 09:16:32.577397  322024 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-632243" [31003176-dbcb-4f15-88c6-ea1592ffdf1b] Running
	I1129 09:16:32.577414  322024 system_pods.go:89] "storage-provisioner" [b28962e0-c388-44d7-8e57-e4030e80dabd] Running
	I1129 09:16:32.577424  322024 system_pods.go:126] duration metric: took 509.413863ms to wait for k8s-apps to be running ...
	I1129 09:16:32.577433  322024 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 09:16:32.577486  322024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:16:32.591542  322024 system_svc.go:56] duration metric: took 14.099503ms WaitForService to wait for kubelet
	I1129 09:16:32.591571  322024 kubeadm.go:587] duration metric: took 11.889799292s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:16:32.591590  322024 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:16:32.594960  322024 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1129 09:16:32.594988  322024 node_conditions.go:123] node cpu capacity is 8
	I1129 09:16:32.595001  322024 node_conditions.go:105] duration metric: took 3.406781ms to run NodePressure ...
	I1129 09:16:32.595016  322024 start.go:242] waiting for startup goroutines ...
	I1129 09:16:32.595025  322024 start.go:247] waiting for cluster config update ...
	I1129 09:16:32.595045  322024 start.go:256] writing updated cluster config ...
	I1129 09:16:32.595334  322024 ssh_runner.go:195] Run: rm -f paused
	I1129 09:16:32.599700  322024 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:16:32.603726  322024 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-z4m7c" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:16:32.608492  322024 pod_ready.go:94] pod "coredns-66bc5c9577-z4m7c" is "Ready"
	I1129 09:16:32.608525  322024 pod_ready.go:86] duration metric: took 4.774667ms for pod "coredns-66bc5c9577-z4m7c" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:16:32.610647  322024 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:16:32.618009  322024 pod_ready.go:94] pod "etcd-default-k8s-diff-port-632243" is "Ready"
	I1129 09:16:32.618048  322024 pod_ready.go:86] duration metric: took 7.376123ms for pod "etcd-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:16:32.620590  322024 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:16:32.625290  322024 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-632243" is "Ready"
	I1129 09:16:32.625320  322024 pod_ready.go:86] duration metric: took 4.703069ms for pod "kube-apiserver-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:16:32.627886  322024 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:16:33.004800  322024 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-632243" is "Ready"
	I1129 09:16:33.004836  322024 pod_ready.go:86] duration metric: took 376.921962ms for pod "kube-controller-manager-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:16:33.204356  322024 pod_ready.go:83] waiting for pod "kube-proxy-p2nf7" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:16:33.604207  322024 pod_ready.go:94] pod "kube-proxy-p2nf7" is "Ready"
	I1129 09:16:33.604239  322024 pod_ready.go:86] duration metric: took 399.852528ms for pod "kube-proxy-p2nf7" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:16:33.804985  322024 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:16:34.204711  322024 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-632243" is "Ready"
	I1129 09:16:34.204740  322024 pod_ready.go:86] duration metric: took 399.726671ms for pod "kube-scheduler-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:16:34.204752  322024 pod_ready.go:40] duration metric: took 1.605019532s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:16:34.250891  322024 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1129 09:16:34.252792  322024 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-632243" cluster and "default" namespace by default
	I1129 09:16:31.667715  328395 addons.go:530] duration metric: took 3.740311794s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1129 09:16:31.669237  328395 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1129 09:16:31.669299  328395 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1129 09:16:32.165030  328395 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 09:16:32.169403  328395 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1129 09:16:32.170688  328395 api_server.go:141] control plane version: v1.28.0
	I1129 09:16:32.170713  328395 api_server.go:131] duration metric: took 506.706984ms to wait for apiserver health ...
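The 500 bodies above are the apiserver's per-check health report (the "[-]poststarthook/rbac/bootstrap-roles" entry is the one still failing); the start-up loop simply polls /healthz until it answers 200 "ok", as it does at 09:16:32. A minimal sketch of such a poll, assuming a TLS-verification-skipping client for brevity, whereas minikube trusts the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz retries GET /healthz until 200, treating 500
    // bodies (the [+]/[-] check list) as "not ready yet".
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   2 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // body is simply "ok"
                }
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
        }
        return fmt.Errorf("apiserver never became healthy within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }
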
	I1129 09:16:32.170721  328395 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:16:32.174656  328395 system_pods.go:59] 8 kube-system pods found
	I1129 09:16:32.174692  328395 system_pods.go:61] "coredns-5dd5756b68-lwg8c" [34b2ab35-01c8-443b-90eb-b685e98a561b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:16:32.174703  328395 system_pods.go:61] "etcd-old-k8s-version-680646" [76196bbf-d848-4229-bc5a-a643536ce9cf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 09:16:32.174721  328395 system_pods.go:61] "kindnet-xjmpm" [4c8108ed-0909-4754-ab0e-0d92a16cdeef] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1129 09:16:32.174732  328395 system_pods.go:61] "kube-apiserver-old-k8s-version-680646" [b8828a68-07a6-4028-9315-ea72656418e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 09:16:32.174743  328395 system_pods.go:61] "kube-controller-manager-old-k8s-version-680646" [73d8f9bb-055a-404b-b261-38de3be66dbc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 09:16:32.174754  328395 system_pods.go:61] "kube-proxy-plgmf" [2911dadf-509a-47fb-80b1-7bad0dac803f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1129 09:16:32.174767  328395 system_pods.go:61] "kube-scheduler-old-k8s-version-680646" [d55c6c54-82fa-4dfc-bd16-473d13fb6004] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 09:16:32.174781  328395 system_pods.go:61] "storage-provisioner" [11cb0c11-4af9-4cf6-945c-a6dcb390a105] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:16:32.174791  328395 system_pods.go:74] duration metric: took 4.063224ms to wait for pod list to return data ...
	I1129 09:16:32.174805  328395 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:16:32.176909  328395 default_sa.go:45] found service account: "default"
	I1129 09:16:32.176929  328395 default_sa.go:55] duration metric: took 2.117219ms for default service account to be created ...
	I1129 09:16:32.176939  328395 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 09:16:32.180175  328395 system_pods.go:86] 8 kube-system pods found
	I1129 09:16:32.180203  328395 system_pods.go:89] "coredns-5dd5756b68-lwg8c" [34b2ab35-01c8-443b-90eb-b685e98a561b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:16:32.180214  328395 system_pods.go:89] "etcd-old-k8s-version-680646" [76196bbf-d848-4229-bc5a-a643536ce9cf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 09:16:32.180224  328395 system_pods.go:89] "kindnet-xjmpm" [4c8108ed-0909-4754-ab0e-0d92a16cdeef] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1129 09:16:32.180233  328395 system_pods.go:89] "kube-apiserver-old-k8s-version-680646" [b8828a68-07a6-4028-9315-ea72656418e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 09:16:32.180243  328395 system_pods.go:89] "kube-controller-manager-old-k8s-version-680646" [73d8f9bb-055a-404b-b261-38de3be66dbc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 09:16:32.180256  328395 system_pods.go:89] "kube-proxy-plgmf" [2911dadf-509a-47fb-80b1-7bad0dac803f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1129 09:16:32.180278  328395 system_pods.go:89] "kube-scheduler-old-k8s-version-680646" [d55c6c54-82fa-4dfc-bd16-473d13fb6004] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 09:16:32.180287  328395 system_pods.go:89] "storage-provisioner" [11cb0c11-4af9-4cf6-945c-a6dcb390a105] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:16:32.180297  328395 system_pods.go:126] duration metric: took 3.351229ms to wait for k8s-apps to be running ...
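"Running / Ready:ContainersNotReady" above means each pod's phase is Running while its Ready condition is still false. A sketch of the same listing with client-go, under the assumption of a stock kubeconfig path; this is illustrative, not minikube's own system_pods.go logic:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig path for illustration.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                }
            }
            // A pod can be Phase=Running with Ready=false, which is exactly
            // the "Running / Ready:ContainersNotReady" state logged above.
            fmt.Printf("%s phase=%s ready=%t\n", p.Name, p.Status.Phase, ready)
        }
    }
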
	I1129 09:16:32.180311  328395 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 09:16:32.180366  328395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:16:32.193765  328395 system_svc.go:56] duration metric: took 13.445261ms WaitForService to wait for kubelet
	I1129 09:16:32.193802  328395 kubeadm.go:587] duration metric: took 4.266436616s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:16:32.193825  328395 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:16:32.196753  328395 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1129 09:16:32.196776  328395 node_conditions.go:123] node cpu capacity is 8
	I1129 09:16:32.196791  328395 node_conditions.go:105] duration metric: took 2.960533ms to run NodePressure ...
	I1129 09:16:32.196803  328395 start.go:242] waiting for startup goroutines ...
	I1129 09:16:32.196813  328395 start.go:247] waiting for cluster config update ...
	I1129 09:16:32.196825  328395 start.go:256] writing updated cluster config ...
	I1129 09:16:32.197113  328395 ssh_runner.go:195] Run: rm -f paused
	I1129 09:16:32.201201  328395 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:16:32.205784  328395 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-lwg8c" in "kube-system" namespace to be "Ready" or be gone ...
	W1129 09:16:34.212642  328395 pod_ready.go:104] pod "coredns-5dd5756b68-lwg8c" is not "Ready", error: <nil>
	I1129 09:16:33.354760  331191 out.go:252] * Restarting existing docker container for "no-preload-897274" ...
	I1129 09:16:33.354860  331191 cli_runner.go:164] Run: docker start no-preload-897274
	I1129 09:16:33.624223  331191 cli_runner.go:164] Run: docker container inspect no-preload-897274 --format={{.State.Status}}
	I1129 09:16:33.645157  331191 kic.go:430] container "no-preload-897274" state is running.
	I1129 09:16:33.645691  331191 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-897274
	I1129 09:16:33.665737  331191 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/no-preload-897274/config.json ...
	I1129 09:16:33.665984  331191 machine.go:94] provisionDockerMachine start ...
	I1129 09:16:33.666057  331191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-897274
	I1129 09:16:33.686402  331191 main.go:143] libmachine: Using SSH client type: native
	I1129 09:16:33.686650  331191 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1129 09:16:33.686662  331191 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:16:33.687367  331191 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56400->127.0.0.1:33114: read: connection reset by peer
	I1129 09:16:36.835205  331191 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-897274
	
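The first dial above fails with "connection reset by peer" because the restarted container's sshd is not accepting connections yet; a later attempt succeeds about three seconds later. A sketch of that dial-with-retry pattern using golang.org/x/crypto/ssh, where the retry count and delay are assumptions, with the address and key path taken from the log:

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // dialWithRetry keeps dialing until sshd inside the container is up.
    func dialWithRetry(addr, keyPath string) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
            Timeout:         5 * time.Second,
        }
        var lastErr error
        for i := 0; i < 10; i++ {
            client, err := ssh.Dial("tcp", addr, cfg)
            if err == nil {
                return client, nil
            }
            lastErr = err // e.g. "ssh: handshake failed: ... connection reset by peer"
            time.Sleep(time.Second)
        }
        return nil, fmt.Errorf("ssh never came up: %w", lastErr)
    }

    func main() {
        _, err := dialWithRetry("127.0.0.1:33114",
            "/home/jenkins/minikube-integration/22000-5652/.minikube/machines/no-preload-897274/id_rsa")
        if err != nil {
            fmt.Println(err)
        }
    }
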
	I1129 09:16:36.835230  331191 ubuntu.go:182] provisioning hostname "no-preload-897274"
	I1129 09:16:36.835279  331191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-897274
	I1129 09:16:36.854479  331191 main.go:143] libmachine: Using SSH client type: native
	I1129 09:16:36.854694  331191 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1129 09:16:36.854706  331191 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-897274 && echo "no-preload-897274" | sudo tee /etc/hostname
	I1129 09:16:37.011492  331191 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-897274
	
	I1129 09:16:37.011594  331191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-897274
	I1129 09:16:37.032132  331191 main.go:143] libmachine: Using SSH client type: native
	I1129 09:16:37.032367  331191 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1129 09:16:37.032392  331191 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-897274' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-897274/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-897274' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:16:37.179015  331191 main.go:143] libmachine: SSH cmd err, output: <nil>: 
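The script above makes the new hostname resolve locally: it rewrites the 127.0.1.1 entry in /etc/hosts if one exists, otherwise appends one. A sketch of how such a command string might be assembled before being sent over SSH; the template mirrors the logged script but is an illustration, not minikube's actual template:

    package main

    import "fmt"

    // etcHostsFixup renders the hostname-fixup shell snippet for a node.
    func etcHostsFixup(name string) string {
        return fmt.Sprintf(`
    if ! grep -xq '.*\s%s' /etc/hosts; then
        if grep -xq '127.0.1.1\s.*' /etc/hosts; then
            sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
        else
            echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
        fi
    fi`, name, name, name)
    }

    func main() {
        fmt.Println(etcHostsFixup("no-preload-897274"))
    }
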
	I1129 09:16:37.179046  331191 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-5652/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-5652/.minikube}
	I1129 09:16:37.179074  331191 ubuntu.go:190] setting up certificates
	I1129 09:16:37.179086  331191 provision.go:84] configureAuth start
	I1129 09:16:37.179151  331191 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-897274
	I1129 09:16:37.198996  331191 provision.go:143] copyHostCerts
	I1129 09:16:37.199055  331191 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem, removing ...
	I1129 09:16:37.199063  331191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem
	I1129 09:16:37.199130  331191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem (1078 bytes)
	I1129 09:16:37.199243  331191 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem, removing ...
	I1129 09:16:37.199252  331191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem
	I1129 09:16:37.199277  331191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem (1123 bytes)
	I1129 09:16:37.199345  331191 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem, removing ...
	I1129 09:16:37.199353  331191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem
	I1129 09:16:37.199375  331191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem (1675 bytes)
	I1129 09:16:37.199450  331191 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem org=jenkins.no-preload-897274 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-897274]
	I1129 09:16:37.400467  331191 provision.go:177] copyRemoteCerts
	I1129 09:16:37.400522  331191 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:16:37.400560  331191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-897274
	I1129 09:16:37.420190  331191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/no-preload-897274/id_rsa Username:docker}
	I1129 09:16:37.524234  331191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1129 09:16:37.544115  331191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1129 09:16:37.563441  331191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1129 09:16:37.582291  331191 provision.go:87] duration metric: took 403.186988ms to configureAuth
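configureAuth regenerates the machine's server certificate with the SAN list logged above (127.0.0.1, 192.168.94.2, localhost, minikube, no-preload-897274). A minimal sketch of issuing a certificate with those SANs via crypto/x509; it self-signs for brevity, whereas minikube signs with its ca-key.pem:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-897274"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "no-preload-897274"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
        }
        // Self-signed: template doubles as parent.
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &priv.PublicKey, priv)
        if err != nil {
            panic(err)
        }
        if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
            panic(err)
        }
    }
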
	I1129 09:16:37.582321  331191 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:16:37.582571  331191 config.go:182] Loaded profile config "no-preload-897274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:16:37.582697  331191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-897274
	I1129 09:16:37.601871  331191 main.go:143] libmachine: Using SSH client type: native
	I1129 09:16:37.602123  331191 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1129 09:16:37.602158  331191 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 09:16:37.955218  331191 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 09:16:37.955244  331191 machine.go:97] duration metric: took 4.289245118s to provisionDockerMachine
	I1129 09:16:37.955260  331191 start.go:293] postStartSetup for "no-preload-897274" (driver="docker")
	I1129 09:16:37.955272  331191 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:16:37.955350  331191 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:16:37.955395  331191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-897274
	I1129 09:16:37.975484  331191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/no-preload-897274/id_rsa Username:docker}
	I1129 09:16:38.079833  331191 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:16:38.083749  331191 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:16:38.083775  331191 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:16:38.083789  331191 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5652/.minikube/addons for local assets ...
	I1129 09:16:38.083875  331191 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5652/.minikube/files for local assets ...
	I1129 09:16:38.083978  331191 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem -> 92162.pem in /etc/ssl/certs
	I1129 09:16:38.084107  331191 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:16:38.092555  331191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem --> /etc/ssl/certs/92162.pem (1708 bytes)
	I1129 09:16:38.111681  331191 start.go:296] duration metric: took 156.404597ms for postStartSetup
	I1129 09:16:38.111767  331191 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:16:38.111814  331191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-897274
	I1129 09:16:38.131932  331191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/no-preload-897274/id_rsa Username:docker}
	I1129 09:16:38.233206  331191 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 09:16:38.237977  331191 fix.go:56] duration metric: took 4.904178025s for fixHost
	I1129 09:16:38.238000  331191 start.go:83] releasing machines lock for "no-preload-897274", held for 4.904232748s
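The two df invocations above read the used percentage and free gigabytes of /var inside the node. The same checks can be done natively with statfs(2); a Linux-only sketch, since the Statfs_t field types differ per OS:

    package main

    import (
        "fmt"
        "syscall"
    )

    func main() {
        var st syscall.Statfs_t
        if err := syscall.Statfs("/var", &st); err != nil {
            panic(err)
        }
        total := st.Blocks * uint64(st.Bsize) // bytes on the filesystem
        free := st.Bavail * uint64(st.Bsize)  // bytes available to non-root
        usedPct := 100 * (total - free) / total
        fmt.Printf("/var: %d%% used, %dG free\n", usedPct, free>>30)
    }
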
	I1129 09:16:38.238060  331191 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-897274
	I1129 09:16:38.257959  331191 ssh_runner.go:195] Run: cat /version.json
	I1129 09:16:38.258032  331191 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:16:38.258096  331191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-897274
	I1129 09:16:38.258035  331191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-897274
	I1129 09:16:38.279574  331191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/no-preload-897274/id_rsa Username:docker}
	I1129 09:16:38.279939  331191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/no-preload-897274/id_rsa Username:docker}
	I1129 09:16:38.433984  331191 ssh_runner.go:195] Run: systemctl --version
	I1129 09:16:38.440652  331191 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 09:16:38.476403  331191 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:16:38.481199  331191 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:16:38.481276  331191 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:16:38.489386  331191 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1129 09:16:38.489411  331191 start.go:496] detecting cgroup driver to use...
	I1129 09:16:38.489443  331191 detect.go:190] detected "systemd" cgroup driver on host os
	I1129 09:16:38.489484  331191 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 09:16:38.505200  331191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 09:16:38.518647  331191 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:16:38.518720  331191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:16:38.533897  331191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:16:38.547412  331191 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:16:38.629367  331191 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:16:38.708463  331191 docker.go:234] disabling docker service ...
	I1129 09:16:38.708534  331191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:16:38.724572  331191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:16:38.737770  331191 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:16:38.823099  331191 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:16:38.907650  331191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
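The repeated "systemctl is-active --quiet" probes above encode their answer purely in the exit status: 0 means the unit is active, non-zero means it is not (or systemctl itself failed). A sketch of that probe from Go:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // unitActive reports whether a systemd unit is active, mirroring
    // "systemctl is-active --quiet <unit>".
    func unitActive(unit string) (bool, error) {
        err := exec.Command("systemctl", "is-active", "--quiet", unit).Run()
        if err == nil {
            return true, nil
        }
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            return false, nil // systemctl ran; the unit is simply not active
        }
        return false, err // systemctl itself could not be executed
    }

    func main() {
        active, err := unitActive("docker")
        fmt.Println(active, err)
    }
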
	I1129 09:16:38.921257  331191 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:16:38.937465  331191 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 09:16:38.937528  331191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:16:38.947545  331191 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1129 09:16:38.947620  331191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:16:38.958460  331191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:16:38.967937  331191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:16:38.977748  331191 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:16:38.986532  331191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:16:38.996268  331191 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:16:39.005517  331191 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:16:39.015263  331191 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:16:39.024145  331191 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:16:39.032514  331191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:16:39.117372  331191 ssh_runner.go:195] Run: sudo systemctl restart crio
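The sed pipeline above rewrites CRI-O's drop-in config (pause image, cgroup manager, conmon cgroup, default sysctls) before daemon-reload and a crio restart. A sketch of the first edit done in-process rather than via sed, with the path and image taken from the log and minimal error handling:

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        // Replace the whole pause_image line, wherever it appears.
        re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            panic(err)
        }
    }
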
	I1129 09:16:39.268699  331191 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 09:16:39.268766  331191 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 09:16:39.273304  331191 start.go:564] Will wait 60s for crictl version
	I1129 09:16:39.273361  331191 ssh_runner.go:195] Run: which crictl
	I1129 09:16:39.277381  331191 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:16:39.302873  331191 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1129 09:16:39.302976  331191 ssh_runner.go:195] Run: crio --version
	I1129 09:16:39.332229  331191 ssh_runner.go:195] Run: crio --version
	I1129 09:16:39.363237  331191 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	W1129 09:16:36.712359  328395 pod_ready.go:104] pod "coredns-5dd5756b68-lwg8c" is not "Ready", error: <nil>
	W1129 09:16:39.211811  328395 pod_ready.go:104] pod "coredns-5dd5756b68-lwg8c" is not "Ready", error: <nil>
	I1129 09:16:39.364448  331191 cli_runner.go:164] Run: docker network inspect no-preload-897274 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:16:39.383343  331191 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1129 09:16:39.387690  331191 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:16:39.398885  331191 kubeadm.go:884] updating cluster {Name:no-preload-897274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-897274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:16:39.398992  331191 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:16:39.399022  331191 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:16:39.433362  331191 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:16:39.433387  331191 cache_images.go:86] Images are preloaded, skipping loading
	I1129 09:16:39.433396  331191 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1129 09:16:39.433516  331191 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-897274 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-897274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 09:16:39.433643  331191 ssh_runner.go:195] Run: crio config
	I1129 09:16:39.482584  331191 cni.go:84] Creating CNI manager for ""
	I1129 09:16:39.482608  331191 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:16:39.482625  331191 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 09:16:39.482651  331191 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-897274 NodeName:no-preload-897274 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:16:39.482809  331191 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-897274"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
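The generated kubeadm.yaml above carries four stacked documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A sketch that round-trips a few of the kubelet fields through gopkg.in/yaml.v3, using a minimal stand-in struct rather than the real k8s.io/kubelet config types:

    package main

    import (
        "fmt"

        "gopkg.in/yaml.v3"
    )

    // kubeletConfig covers only the fields shown above; it is not the
    // real KubeletConfiguration type.
    type kubeletConfig struct {
        Kind                     string `yaml:"kind"`
        CgroupDriver             string `yaml:"cgroupDriver"`
        ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
        FailSwapOn               bool   `yaml:"failSwapOn"`
    }

    func main() {
        doc := `
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
    failSwapOn: false
    `
        var kc kubeletConfig
        if err := yaml.Unmarshal([]byte(doc), &kc); err != nil {
            panic(err)
        }
        fmt.Printf("%+v\n", kc)
    }
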
	I1129 09:16:39.482905  331191 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 09:16:39.491848  331191 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 09:16:39.491930  331191 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:16:39.500511  331191 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1129 09:16:39.513978  331191 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:16:39.527569  331191 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1129 09:16:39.540985  331191 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1129 09:16:39.545061  331191 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:16:39.556521  331191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:16:39.638452  331191 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:16:39.672589  331191 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/no-preload-897274 for IP: 192.168.94.2
	I1129 09:16:39.672616  331191 certs.go:195] generating shared ca certs ...
	I1129 09:16:39.672638  331191 certs.go:227] acquiring lock for ca certs: {Name:mk9945b3aa42b3d50b9bfd795af3a1c63f4e35bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:39.672802  331191 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key
	I1129 09:16:39.672864  331191 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key
	I1129 09:16:39.672875  331191 certs.go:257] generating profile certs ...
	I1129 09:16:39.672955  331191 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/no-preload-897274/client.key
	I1129 09:16:39.673003  331191 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/no-preload-897274/apiserver.key.c2e76d87
	I1129 09:16:39.673040  331191 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/no-preload-897274/proxy-client.key
	I1129 09:16:39.673153  331191 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216.pem (1338 bytes)
	W1129 09:16:39.673184  331191 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216_empty.pem, impossibly tiny 0 bytes
	I1129 09:16:39.673195  331191 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem (1679 bytes)
	I1129 09:16:39.673219  331191 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem (1078 bytes)
	I1129 09:16:39.673243  331191 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:16:39.673266  331191 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem (1675 bytes)
	I1129 09:16:39.673311  331191 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem (1708 bytes)
	I1129 09:16:39.673926  331191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:16:39.694826  331191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 09:16:39.716408  331191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:16:39.737064  331191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1129 09:16:39.762655  331191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/no-preload-897274/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1129 09:16:39.781753  331191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/no-preload-897274/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1129 09:16:39.800668  331191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/no-preload-897274/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:16:39.819193  331191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/no-preload-897274/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1129 09:16:39.837681  331191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:16:39.856326  331191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216.pem --> /usr/share/ca-certificates/9216.pem (1338 bytes)
	I1129 09:16:39.876000  331191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem --> /usr/share/ca-certificates/92162.pem (1708 bytes)
	I1129 09:16:39.894816  331191 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:16:39.909444  331191 ssh_runner.go:195] Run: openssl version
	I1129 09:16:39.916448  331191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/92162.pem && ln -fs /usr/share/ca-certificates/92162.pem /etc/ssl/certs/92162.pem"
	I1129 09:16:39.926248  331191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92162.pem
	I1129 09:16:39.930365  331191 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:34 /usr/share/ca-certificates/92162.pem
	I1129 09:16:39.930419  331191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92162.pem
	I1129 09:16:39.966433  331191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/92162.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 09:16:39.975157  331191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:16:39.984273  331191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:16:39.988577  331191 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:16:39.988637  331191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:16:40.027295  331191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 09:16:40.037079  331191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9216.pem && ln -fs /usr/share/ca-certificates/9216.pem /etc/ssl/certs/9216.pem"
	I1129 09:16:40.046476  331191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9216.pem
	I1129 09:16:40.050942  331191 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:34 /usr/share/ca-certificates/9216.pem
	I1129 09:16:40.051015  331191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9216.pem
	I1129 09:16:40.087521  331191 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9216.pem /etc/ssl/certs/51391683.0"
	I1129 09:16:40.096305  331191 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:16:40.100586  331191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1129 09:16:40.137427  331191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1129 09:16:40.188216  331191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1129 09:16:40.243216  331191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1129 09:16:40.303223  331191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1129 09:16:40.363689  331191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
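Each "openssl x509 -checkend 86400" above asks whether the certificate expires within the next 24 hours. The equivalent check in Go, against one of the probed paths:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/front-proxy-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Same predicate as "-checkend 86400": will it expire within 24h?
        if cert.NotAfter.Before(time.Now().Add(24 * time.Hour)) {
            fmt.Println("certificate expires within 86400s")
        } else {
            fmt.Println("certificate is valid for at least another day")
        }
    }
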
	I1129 09:16:40.404542  331191 kubeadm.go:401] StartCluster: {Name:no-preload-897274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-897274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:16:40.404619  331191 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 09:16:40.404696  331191 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:16:40.436648  331191 cri.go:89] found id: "65cad02ba2a7910bcdcffb28c773b53da4d5023aecfd588deeacf22d8dca4a38"
	I1129 09:16:40.436672  331191 cri.go:89] found id: "ad66b46c591ebaf67ffea99e3f782c8b3c848d695dab97ba85d7b414cf4c3170"
	I1129 09:16:40.436678  331191 cri.go:89] found id: "652695edd3b368ed64211f7ee974fad1ce2be0ae46ac90c153b50e751c36007b"
	I1129 09:16:40.436681  331191 cri.go:89] found id: "aef2b06f8a8ac95a822e5865d9062a9500764f567fb042a1dbeda8630e6e5914"
	I1129 09:16:40.436684  331191 cri.go:89] found id: ""
	I1129 09:16:40.436731  331191 ssh_runner.go:195] Run: sudo runc list -f json
	W1129 09:16:40.450109  331191 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:16:40Z" level=error msg="open /run/runc: no such file or directory"
	I1129 09:16:40.450203  331191 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:16:40.459516  331191 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1129 09:16:40.459543  331191 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1129 09:16:40.459611  331191 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1129 09:16:40.467867  331191 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1129 09:16:40.469024  331191 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-897274" does not appear in /home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 09:16:40.469958  331191 kubeconfig.go:62] /home/jenkins/minikube-integration/22000-5652/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-897274" cluster setting kubeconfig missing "no-preload-897274" context setting]
	I1129 09:16:40.471513  331191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/kubeconfig: {Name:mk6280be5f60ace95b1c1acc672c087e2366542f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:40.473383  331191 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1129 09:16:40.482322  331191 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1129 09:16:40.482376  331191 kubeadm.go:602] duration metric: took 22.824948ms to restartPrimaryControlPlane
	I1129 09:16:40.482397  331191 kubeadm.go:403] duration metric: took 77.857746ms to StartCluster
	I1129 09:16:40.482424  331191 settings.go:142] acquiring lock: {Name:mkebe0b2667fa03f51e92459cd7b7f37fd4c23bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:40.482503  331191 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 09:16:40.484782  331191 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/kubeconfig: {Name:mk6280be5f60ace95b1c1acc672c087e2366542f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:16:40.485100  331191 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 09:16:40.485203  331191 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 09:16:40.485308  331191 addons.go:70] Setting storage-provisioner=true in profile "no-preload-897274"
	I1129 09:16:40.485327  331191 addons.go:70] Setting dashboard=true in profile "no-preload-897274"
	I1129 09:16:40.485335  331191 addons.go:239] Setting addon storage-provisioner=true in "no-preload-897274"
	W1129 09:16:40.485344  331191 addons.go:248] addon storage-provisioner should already be in state true
	I1129 09:16:40.485342  331191 config.go:182] Loaded profile config "no-preload-897274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:16:40.485346  331191 addons.go:239] Setting addon dashboard=true in "no-preload-897274"
	I1129 09:16:40.485372  331191 host.go:66] Checking if "no-preload-897274" exists ...
	I1129 09:16:40.485360  331191 addons.go:70] Setting default-storageclass=true in profile "no-preload-897274"
	W1129 09:16:40.485380  331191 addons.go:248] addon dashboard should already be in state true
	I1129 09:16:40.485404  331191 host.go:66] Checking if "no-preload-897274" exists ...
	I1129 09:16:40.485408  331191 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-897274"
	I1129 09:16:40.485736  331191 cli_runner.go:164] Run: docker container inspect no-preload-897274 --format={{.State.Status}}
	I1129 09:16:40.485916  331191 cli_runner.go:164] Run: docker container inspect no-preload-897274 --format={{.State.Status}}
	I1129 09:16:40.485938  331191 cli_runner.go:164] Run: docker container inspect no-preload-897274 --format={{.State.Status}}
	I1129 09:16:40.493922  331191 out.go:179] * Verifying Kubernetes components...
	I1129 09:16:40.497995  331191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:16:40.516019  331191 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:16:40.518690  331191 addons.go:239] Setting addon default-storageclass=true in "no-preload-897274"
	W1129 09:16:40.519019  331191 addons.go:248] addon default-storageclass should already be in state true
	I1129 09:16:40.519084  331191 host.go:66] Checking if "no-preload-897274" exists ...
	I1129 09:16:40.518884  331191 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1129 09:16:40.518990  331191 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:16:40.519274  331191 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 09:16:40.519332  331191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-897274
	I1129 09:16:40.519572  331191 cli_runner.go:164] Run: docker container inspect no-preload-897274 --format={{.State.Status}}
	I1129 09:16:40.527026  331191 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1129 09:16:40.528776  331191 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1129 09:16:40.528802  331191 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1129 09:16:40.528911  331191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-897274
	I1129 09:16:40.560828  331191 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 09:16:40.561054  331191 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 09:16:40.561509  331191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-897274
	I1129 09:16:40.562911  331191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/no-preload-897274/id_rsa Username:docker}
	I1129 09:16:40.563792  331191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/no-preload-897274/id_rsa Username:docker}
	I1129 09:16:40.588729  331191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/no-preload-897274/id_rsa Username:docker}
	I1129 09:16:40.682148  331191 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:16:40.696951  331191 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1129 09:16:40.696975  331191 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1129 09:16:40.697343  331191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:16:40.704836  331191 node_ready.go:35] waiting up to 6m0s for node "no-preload-897274" to be "Ready" ...
	I1129 09:16:40.719860  331191 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1129 09:16:40.720568  331191 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1129 09:16:40.724128  331191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 09:16:40.740876  331191 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1129 09:16:40.740900  331191 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1129 09:16:40.765806  331191 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1129 09:16:40.765835  331191 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1129 09:16:40.795254  331191 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1129 09:16:40.795287  331191 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1129 09:16:40.816481  331191 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1129 09:16:40.816511  331191 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1129 09:16:40.838916  331191 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1129 09:16:40.838943  331191 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1129 09:16:40.857038  331191 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1129 09:16:40.857066  331191 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1129 09:16:40.873324  331191 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1129 09:16:40.873362  331191 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1129 09:16:40.890902  331191 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1129 09:16:42.298585  331191 node_ready.go:49] node "no-preload-897274" is "Ready"
	I1129 09:16:42.298716  331191 node_ready.go:38] duration metric: took 1.593806227s for node "no-preload-897274" to be "Ready" ...
	I1129 09:16:42.298739  331191 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:16:42.298813  331191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:16:42.982386  331191 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.285015803s)
	I1129 09:16:42.982467  331191 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.258318978s)
	I1129 09:16:42.982619  331191 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.091665491s)
	I1129 09:16:42.982729  331191 api_server.go:72] duration metric: took 2.497591727s to wait for apiserver process to appear ...
	I1129 09:16:42.982788  331191 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:16:42.982807  331191 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1129 09:16:42.987048  331191 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-897274 addons enable metrics-server
	
	I1129 09:16:42.988236  331191 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1129 09:16:42.988262  331191 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[healthz check list identical to the 500 response above; verbatim duplicate omitted]
	I1129 09:16:42.992640  331191 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1129 09:16:42.995474  331191 addons.go:530] duration metric: took 2.510270244s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
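	
	The transient 500 from /healthz above is expected this early in startup: the only failing checks are the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks, which typically complete moments later (the addons enable step above already succeeded against the same apiserver). To re-run the same verbose probe by hand, a minimal check assuming kubectl is already pointed at this cluster's kubeconfig:
	
		kubectl get --raw='/healthz?verbose'
	
	Each [+]/[-] line in the response names one health check, matching the output captured above.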
	
	
	==> CRI-O <==
	Nov 29 09:16:31 default-k8s-diff-port-632243 crio[770]: time="2025-11-29T09:16:31.957592848Z" level=info msg="Started container" PID=1809 containerID=7fc32d36e6a6bda5233dc5ebfc2203511ae9a347c4efc355ff6736bce746a729 description=kube-system/coredns-66bc5c9577-z4m7c/coredns id=e553700e-9cd6-4a80-9ca0-d398c96c009e name=/runtime.v1.RuntimeService/StartContainer sandboxID=f9ab5d0a33f40c1bab2c975502f56a5898b33f0bc555d57eb2e00c01403d1ac2
	Nov 29 09:16:31 default-k8s-diff-port-632243 crio[770]: time="2025-11-29T09:16:31.957960666Z" level=info msg="Started container" PID=1806 containerID=4a785d12994a52f38e416bc0880c67d398d76e8428711d5381597cbb3217cc08 description=kube-system/storage-provisioner/storage-provisioner id=dbd82ee1-ac8d-4065-b429-ab431b69ae98 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e33fd41b9c4d0c2c64aec1e68b0cc4f2806caed40bab7243dd6f4a46c5c388b0
	Nov 29 09:16:34 default-k8s-diff-port-632243 crio[770]: time="2025-11-29T09:16:34.720022399Z" level=info msg="Running pod sandbox: default/busybox/POD" id=594e2c80-e56d-44f7-86d3-59743746ca76 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 09:16:34 default-k8s-diff-port-632243 crio[770]: time="2025-11-29T09:16:34.720105965Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:16:34 default-k8s-diff-port-632243 crio[770]: time="2025-11-29T09:16:34.726061695Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:e45a1fbb9325f1022e1c9642553703ca3246e525ff457a1550fec5c703ad6082 UID:2d48cacb-d056-407e-9a3b-3c0ac0e7456f NetNS:/var/run/netns/4b531ce2-4957-442f-98b5-20950c272cdb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009183c8}] Aliases:map[]}"
	Nov 29 09:16:34 default-k8s-diff-port-632243 crio[770]: time="2025-11-29T09:16:34.726094255Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 29 09:16:34 default-k8s-diff-port-632243 crio[770]: time="2025-11-29T09:16:34.739037393Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:e45a1fbb9325f1022e1c9642553703ca3246e525ff457a1550fec5c703ad6082 UID:2d48cacb-d056-407e-9a3b-3c0ac0e7456f NetNS:/var/run/netns/4b531ce2-4957-442f-98b5-20950c272cdb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009183c8}] Aliases:map[]}"
	Nov 29 09:16:34 default-k8s-diff-port-632243 crio[770]: time="2025-11-29T09:16:34.739184162Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 29 09:16:34 default-k8s-diff-port-632243 crio[770]: time="2025-11-29T09:16:34.739974731Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 29 09:16:34 default-k8s-diff-port-632243 crio[770]: time="2025-11-29T09:16:34.740704828Z" level=info msg="Ran pod sandbox e45a1fbb9325f1022e1c9642553703ca3246e525ff457a1550fec5c703ad6082 with infra container: default/busybox/POD" id=594e2c80-e56d-44f7-86d3-59743746ca76 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 09:16:34 default-k8s-diff-port-632243 crio[770]: time="2025-11-29T09:16:34.741966899Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5c03090a-28af-46c9-8d12-24bfa4fcd978 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:16:34 default-k8s-diff-port-632243 crio[770]: time="2025-11-29T09:16:34.742106961Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=5c03090a-28af-46c9-8d12-24bfa4fcd978 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:16:34 default-k8s-diff-port-632243 crio[770]: time="2025-11-29T09:16:34.742158668Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=5c03090a-28af-46c9-8d12-24bfa4fcd978 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:16:34 default-k8s-diff-port-632243 crio[770]: time="2025-11-29T09:16:34.742974526Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=625b364c-06db-4656-af5f-e3450eb4a705 name=/runtime.v1.ImageService/PullImage
	Nov 29 09:16:34 default-k8s-diff-port-632243 crio[770]: time="2025-11-29T09:16:34.744769783Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 29 09:16:36 default-k8s-diff-port-632243 crio[770]: time="2025-11-29T09:16:36.061694791Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=625b364c-06db-4656-af5f-e3450eb4a705 name=/runtime.v1.ImageService/PullImage
	Nov 29 09:16:36 default-k8s-diff-port-632243 crio[770]: time="2025-11-29T09:16:36.062517621Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=507358e0-710c-40a1-a551-1e66cd310375 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:16:36 default-k8s-diff-port-632243 crio[770]: time="2025-11-29T09:16:36.063953987Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=21b3eaa9-48f3-4280-9968-7eecb472a0ae name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:16:36 default-k8s-diff-port-632243 crio[770]: time="2025-11-29T09:16:36.067325152Z" level=info msg="Creating container: default/busybox/busybox" id=20763ff0-eeed-41c1-ab88-539e25fda919 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:16:36 default-k8s-diff-port-632243 crio[770]: time="2025-11-29T09:16:36.067479656Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:16:36 default-k8s-diff-port-632243 crio[770]: time="2025-11-29T09:16:36.071419212Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:16:36 default-k8s-diff-port-632243 crio[770]: time="2025-11-29T09:16:36.071907032Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:16:36 default-k8s-diff-port-632243 crio[770]: time="2025-11-29T09:16:36.098528654Z" level=info msg="Created container 183bfe867632794029e6802cfbd19c1d81861893a5788224f2a416cd0d528ea3: default/busybox/busybox" id=20763ff0-eeed-41c1-ab88-539e25fda919 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:16:36 default-k8s-diff-port-632243 crio[770]: time="2025-11-29T09:16:36.099206984Z" level=info msg="Starting container: 183bfe867632794029e6802cfbd19c1d81861893a5788224f2a416cd0d528ea3" id=421ba182-bc70-4c61-934f-2f2c0f828ce9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 09:16:36 default-k8s-diff-port-632243 crio[770]: time="2025-11-29T09:16:36.101337544Z" level=info msg="Started container" PID=1890 containerID=183bfe867632794029e6802cfbd19c1d81861893a5788224f2a416cd0d528ea3 description=default/busybox/busybox id=421ba182-bc70-4c61-934f-2f2c0f828ce9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e45a1fbb9325f1022e1c9642553703ca3246e525ff457a1550fec5c703ad6082
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	183bfe8676327       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   e45a1fbb9325f       busybox                                                default
	7fc32d36e6a6b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   f9ab5d0a33f40       coredns-66bc5c9577-z4m7c                               kube-system
	4a785d12994a5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   e33fd41b9c4d0       storage-provisioner                                    kube-system
	0b4e18e3d1615       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   31fb6b5e84846       kindnet-tpstm                                          kube-system
	ab234fcd347d5       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      23 seconds ago      Running             kube-proxy                0                   703c22c7f322a       kube-proxy-p2nf7                                       kube-system
	a5e97906cc584       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      33 seconds ago      Running             kube-controller-manager   0                   ff63fe2128fdd       kube-controller-manager-default-k8s-diff-port-632243   kube-system
	baeaceec9bfd9       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      33 seconds ago      Running             kube-apiserver            0                   1d48910c2c49a       kube-apiserver-default-k8s-diff-port-632243            kube-system
	40d4611426f42       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      33 seconds ago      Running             etcd                      0                   42e87d72698aa       etcd-default-k8s-diff-port-632243                      kube-system
	f9e8250ce2d88       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      33 seconds ago      Running             kube-scheduler            0                   26666d717132e       kube-scheduler-default-k8s-diff-port-632243            kube-system
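	
	This table is the node's CRI view of all containers. A similar listing can be reproduced on a live profile via crictl; a hypothetical invocation, assuming the cluster is still running:
	
		minikube ssh -p default-k8s-diff-port-632243 -- sudo crictl ps -a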
	
	
	==> coredns [7fc32d36e6a6bda5233dc5ebfc2203511ae9a347c4efc355ff6736bce746a729] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53259 - 19247 "HINFO IN 7729311781631807402.6032474351949684607. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.085226696s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-632243
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-632243
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=default-k8s-diff-port-632243
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T09_16_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 09:16:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-632243
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 09:16:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 09:16:31 +0000   Sat, 29 Nov 2025 09:16:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 09:16:31 +0000   Sat, 29 Nov 2025 09:16:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 09:16:31 +0000   Sat, 29 Nov 2025 09:16:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 09:16:31 +0000   Sat, 29 Nov 2025 09:16:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-632243
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                bcdb7d0a-1357-4cf0-985d-43631a533a4d
	  Boot ID:                    5b66e152-4b71-4129-a911-cefbf4861c86
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-z4m7c                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-default-k8s-diff-port-632243                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-tpstm                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-default-k8s-diff-port-632243             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-632243    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-p2nf7                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-default-k8s-diff-port-632243             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 29s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s   kubelet          Node default-k8s-diff-port-632243 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s   kubelet          Node default-k8s-diff-port-632243 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s   kubelet          Node default-k8s-diff-port-632243 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node default-k8s-diff-port-632243 event: Registered Node default-k8s-diff-port-632243 in Controller
	  Normal  NodeReady                13s   kubelet          Node default-k8s-diff-port-632243 status is now: NodeReady
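	
	The block above is standard kubectl describe node output for the control-plane node; the same snapshot can be regenerated while the cluster is up with:
	
		kubectl describe node default-k8s-diff-port-632243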
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 e1 87 e4 84 ae 08 06
	[  +0.002640] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8a c7 6c 3e 12 d1 08 06
	[ +14.778105] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 56 8a 29 c7 2a 6f 08 06
	[ +13.520989] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 86 32 aa eb 52 77 08 06
	[  +0.000371] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 56 8a 29 c7 2a 6f 08 06
	[ +14.762906] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 df ae 93 da f6 08 06
	[  +0.000384] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a c7 6c 3e 12 d1 08 06
	[Nov29 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 2e a3 ac f9 5c c0 08 06
	[ +13.297626] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 33 ff 54 aa a1 08 06
	[  +7.390906] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0a 68 f3 3d 3c e9 08 06
	[  +0.000436] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 2e a3 ac f9 5c c0 08 06
	[  +7.145922] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ea c3 2f 2a a3 01 08 06
	[  +0.000415] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e2 33 ff 54 aa a1 08 06
	
	
	==> etcd [40d4611426f422ee8603134ecbd44c664bfe2040e66a95d85bc51b059b478f0a] <==
	{"level":"warn","ts":"2025-11-29T09:16:11.871279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:11.878302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:11.888035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:11.895643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:11.903762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:11.911353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:11.919043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:11.926632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:11.934804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:11.941932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:11.950976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:11.957447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:11.964110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:11.973675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:11.982319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:11.996091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:12.002823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:12.009479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:12.016024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:12.022771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:12.043186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:12.046783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:12.053613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:12.060341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:12.114318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38178","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:16:44 up 59 min,  0 user,  load average: 4.83, 4.03, 2.48
	Linux default-k8s-diff-port-632243 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0b4e18e3d1615714bc3b6b0b10b5d6c2f31f76d06acd24f4b94e222f427f01de] <==
	I1129 09:16:20.972689       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 09:16:20.973072       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1129 09:16:20.973248       1 main.go:148] setting mtu 1500 for CNI 
	I1129 09:16:20.973273       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 09:16:20.973303       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T09:16:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 09:16:21.196507       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 09:16:21.196546       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 09:16:21.196559       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 09:16:21.290795       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1129 09:16:21.490296       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 09:16:21.490375       1 metrics.go:72] Registering metrics
	I1129 09:16:21.490652       1 controller.go:711] "Syncing nftables rules"
	I1129 09:16:31.199221       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1129 09:16:31.199310       1 main.go:301] handling current node
	I1129 09:16:41.195579       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1129 09:16:41.195668       1 main.go:301] handling current node
	
	
	==> kube-apiserver [baeaceec9bfd9f0180bfb9d04c1420541dedd3a4b3765e57d7c27b5704dc56c5] <==
	I1129 09:16:12.667198       1 policy_source.go:240] refreshing policies
	I1129 09:16:12.668109       1 controller.go:667] quota admission added evaluator for: namespaces
	I1129 09:16:12.768231       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:16:12.768333       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1129 09:16:12.775682       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:16:12.776368       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1129 09:16:12.858280       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 09:16:13.568866       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1129 09:16:13.573263       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1129 09:16:13.573286       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 09:16:14.230702       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 09:16:14.282100       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 09:16:14.374755       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1129 09:16:14.381774       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1129 09:16:14.383051       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 09:16:14.388028       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 09:16:14.615611       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 09:16:15.597885       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 09:16:15.607698       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1129 09:16:15.616646       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1129 09:16:20.264118       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1129 09:16:20.363189       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 09:16:20.515754       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:16:20.521715       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1129 09:16:42.534433       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8444->192.168.103.1:38822: use of closed network connection
	
	
	==> kube-controller-manager [a5e97906cc584445c8340a4375ee1dcd27d3281b8a9fe5bc2ebbfb560a2f14c9] <==
	I1129 09:16:19.605542       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1129 09:16:19.609633       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1129 09:16:19.609687       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1129 09:16:19.610790       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1129 09:16:19.610911       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1129 09:16:19.610887       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1129 09:16:19.610952       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1129 09:16:19.611115       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1129 09:16:19.611181       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1129 09:16:19.611200       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1129 09:16:19.611290       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1129 09:16:19.611296       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1129 09:16:19.612020       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1129 09:16:19.612032       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1129 09:16:19.612153       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1129 09:16:19.612679       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1129 09:16:19.618012       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:16:19.619649       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1129 09:16:19.628836       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1129 09:16:19.630252       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1129 09:16:19.630911       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1129 09:16:19.631117       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1129 09:16:19.640056       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1129 09:16:19.655442       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:16:34.563061       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ab234fcd347d59484f2a2e1c5a84f5c4c50bd6a953a6960a774bfc2342ea60af] <==
	I1129 09:16:20.708750       1 server_linux.go:53] "Using iptables proxy"
	I1129 09:16:20.790310       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 09:16:20.891057       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 09:16:20.891119       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1129 09:16:20.891205       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 09:16:20.932315       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 09:16:20.932475       1 server_linux.go:132] "Using iptables Proxier"
	I1129 09:16:20.942722       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 09:16:20.943414       1 server.go:527] "Version info" version="v1.34.1"
	I1129 09:16:20.943499       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:16:20.945620       1 config.go:200] "Starting service config controller"
	I1129 09:16:20.946257       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 09:16:20.945779       1 config.go:309] "Starting node config controller"
	I1129 09:16:20.945910       1 config.go:106] "Starting endpoint slice config controller"
	I1129 09:16:20.946298       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 09:16:20.946306       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 09:16:20.946318       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 09:16:20.945922       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 09:16:20.946338       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 09:16:21.046691       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 09:16:21.046742       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 09:16:21.046767       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [f9e8250ce2d88b76baf297e5c25561d80a0e551d387f7abf247495f662bcf489] <==
	E1129 09:16:12.622404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1129 09:16:12.622420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 09:16:12.622512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 09:16:12.622495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 09:16:12.622677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1129 09:16:12.622701       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1129 09:16:12.622925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1129 09:16:12.622954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1129 09:16:12.622982       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1129 09:16:12.623065       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 09:16:12.623065       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1129 09:16:12.623356       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1129 09:16:12.623373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1129 09:16:13.465199       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1129 09:16:13.530173       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 09:16:13.538573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1129 09:16:13.650183       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1129 09:16:13.703415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1129 09:16:13.718696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1129 09:16:13.837299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 09:16:13.894830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1129 09:16:13.909522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1129 09:16:13.967146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1129 09:16:14.102126       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1129 09:16:15.818193       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 09:16:16 default-k8s-diff-port-632243 kubelet[1296]: I1129 09:16:16.506372    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-632243" podStartSLOduration=1.506317225 podStartE2EDuration="1.506317225s" podCreationTimestamp="2025-11-29 09:16:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:16:16.492364662 +0000 UTC m=+1.146260376" watchObservedRunningTime="2025-11-29 09:16:16.506317225 +0000 UTC m=+1.160212953"
	Nov 29 09:16:16 default-k8s-diff-port-632243 kubelet[1296]: I1129 09:16:16.522537    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-632243" podStartSLOduration=1.522509389 podStartE2EDuration="1.522509389s" podCreationTimestamp="2025-11-29 09:16:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:16:16.506243461 +0000 UTC m=+1.160139188" watchObservedRunningTime="2025-11-29 09:16:16.522509389 +0000 UTC m=+1.176405122"
	Nov 29 09:16:16 default-k8s-diff-port-632243 kubelet[1296]: I1129 09:16:16.540161    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-632243" podStartSLOduration=1.540102715 podStartE2EDuration="1.540102715s" podCreationTimestamp="2025-11-29 09:16:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:16:16.522685557 +0000 UTC m=+1.176581271" watchObservedRunningTime="2025-11-29 09:16:16.540102715 +0000 UTC m=+1.193998429"
	Nov 29 09:16:16 default-k8s-diff-port-632243 kubelet[1296]: I1129 09:16:16.540288    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-632243" podStartSLOduration=1.540281485 podStartE2EDuration="1.540281485s" podCreationTimestamp="2025-11-29 09:16:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:16:16.539354831 +0000 UTC m=+1.193250540" watchObservedRunningTime="2025-11-29 09:16:16.540281485 +0000 UTC m=+1.194177214"
	Nov 29 09:16:19 default-k8s-diff-port-632243 kubelet[1296]: I1129 09:16:19.576403    1296 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 29 09:16:19 default-k8s-diff-port-632243 kubelet[1296]: I1129 09:16:19.577041    1296 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 29 09:16:20 default-k8s-diff-port-632243 kubelet[1296]: I1129 09:16:20.368600    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/15e600f0-69fa-43be-ad87-07a80e245c73-cni-cfg\") pod \"kindnet-tpstm\" (UID: \"15e600f0-69fa-43be-ad87-07a80e245c73\") " pod="kube-system/kindnet-tpstm"
	Nov 29 09:16:20 default-k8s-diff-port-632243 kubelet[1296]: I1129 09:16:20.368664    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnxft\" (UniqueName: \"kubernetes.io/projected/50905f73-5af2-401c-a482-7d68d8d3bdc4-kube-api-access-lnxft\") pod \"kube-proxy-p2nf7\" (UID: \"50905f73-5af2-401c-a482-7d68d8d3bdc4\") " pod="kube-system/kube-proxy-p2nf7"
	Nov 29 09:16:20 default-k8s-diff-port-632243 kubelet[1296]: I1129 09:16:20.368695    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15e600f0-69fa-43be-ad87-07a80e245c73-lib-modules\") pod \"kindnet-tpstm\" (UID: \"15e600f0-69fa-43be-ad87-07a80e245c73\") " pod="kube-system/kindnet-tpstm"
	Nov 29 09:16:20 default-k8s-diff-port-632243 kubelet[1296]: I1129 09:16:20.368779    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/50905f73-5af2-401c-a482-7d68d8d3bdc4-kube-proxy\") pod \"kube-proxy-p2nf7\" (UID: \"50905f73-5af2-401c-a482-7d68d8d3bdc4\") " pod="kube-system/kube-proxy-p2nf7"
	Nov 29 09:16:20 default-k8s-diff-port-632243 kubelet[1296]: I1129 09:16:20.368824    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15e600f0-69fa-43be-ad87-07a80e245c73-xtables-lock\") pod \"kindnet-tpstm\" (UID: \"15e600f0-69fa-43be-ad87-07a80e245c73\") " pod="kube-system/kindnet-tpstm"
	Nov 29 09:16:20 default-k8s-diff-port-632243 kubelet[1296]: I1129 09:16:20.368897    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlq22\" (UniqueName: \"kubernetes.io/projected/15e600f0-69fa-43be-ad87-07a80e245c73-kube-api-access-rlq22\") pod \"kindnet-tpstm\" (UID: \"15e600f0-69fa-43be-ad87-07a80e245c73\") " pod="kube-system/kindnet-tpstm"
	Nov 29 09:16:20 default-k8s-diff-port-632243 kubelet[1296]: I1129 09:16:20.368922    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50905f73-5af2-401c-a482-7d68d8d3bdc4-xtables-lock\") pod \"kube-proxy-p2nf7\" (UID: \"50905f73-5af2-401c-a482-7d68d8d3bdc4\") " pod="kube-system/kube-proxy-p2nf7"
	Nov 29 09:16:20 default-k8s-diff-port-632243 kubelet[1296]: I1129 09:16:20.368943    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50905f73-5af2-401c-a482-7d68d8d3bdc4-lib-modules\") pod \"kube-proxy-p2nf7\" (UID: \"50905f73-5af2-401c-a482-7d68d8d3bdc4\") " pod="kube-system/kube-proxy-p2nf7"
	Nov 29 09:16:21 default-k8s-diff-port-632243 kubelet[1296]: I1129 09:16:21.492749    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-tpstm" podStartSLOduration=1.492729653 podStartE2EDuration="1.492729653s" podCreationTimestamp="2025-11-29 09:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:16:21.492402501 +0000 UTC m=+6.146298215" watchObservedRunningTime="2025-11-29 09:16:21.492729653 +0000 UTC m=+6.146625365"
	Nov 29 09:16:25 default-k8s-diff-port-632243 kubelet[1296]: I1129 09:16:25.769300    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-p2nf7" podStartSLOduration=5.7692757 podStartE2EDuration="5.7692757s" podCreationTimestamp="2025-11-29 09:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:16:21.52195736 +0000 UTC m=+6.175853074" watchObservedRunningTime="2025-11-29 09:16:25.7692757 +0000 UTC m=+10.423171414"
	Nov 29 09:16:31 default-k8s-diff-port-632243 kubelet[1296]: I1129 09:16:31.563587    1296 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 29 09:16:31 default-k8s-diff-port-632243 kubelet[1296]: I1129 09:16:31.651497    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz85r\" (UniqueName: \"kubernetes.io/projected/b28962e0-c388-44d7-8e57-e4030e80dabd-kube-api-access-fz85r\") pod \"storage-provisioner\" (UID: \"b28962e0-c388-44d7-8e57-e4030e80dabd\") " pod="kube-system/storage-provisioner"
	Nov 29 09:16:31 default-k8s-diff-port-632243 kubelet[1296]: I1129 09:16:31.651544    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98358d85-a090-44af-b52c-b5043215489d-config-volume\") pod \"coredns-66bc5c9577-z4m7c\" (UID: \"98358d85-a090-44af-b52c-b5043215489d\") " pod="kube-system/coredns-66bc5c9577-z4m7c"
	Nov 29 09:16:31 default-k8s-diff-port-632243 kubelet[1296]: I1129 09:16:31.651566    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b28962e0-c388-44d7-8e57-e4030e80dabd-tmp\") pod \"storage-provisioner\" (UID: \"b28962e0-c388-44d7-8e57-e4030e80dabd\") " pod="kube-system/storage-provisioner"
	Nov 29 09:16:31 default-k8s-diff-port-632243 kubelet[1296]: I1129 09:16:31.651582    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gttv\" (UniqueName: \"kubernetes.io/projected/98358d85-a090-44af-b52c-b5043215489d-kube-api-access-5gttv\") pod \"coredns-66bc5c9577-z4m7c\" (UID: \"98358d85-a090-44af-b52c-b5043215489d\") " pod="kube-system/coredns-66bc5c9577-z4m7c"
	Nov 29 09:16:32 default-k8s-diff-port-632243 kubelet[1296]: I1129 09:16:32.522984    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-z4m7c" podStartSLOduration=12.522962984 podStartE2EDuration="12.522962984s" podCreationTimestamp="2025-11-29 09:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:16:32.522643571 +0000 UTC m=+17.176539285" watchObservedRunningTime="2025-11-29 09:16:32.522962984 +0000 UTC m=+17.176858698"
	Nov 29 09:16:32 default-k8s-diff-port-632243 kubelet[1296]: I1129 09:16:32.554019    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.553994245 podStartE2EDuration="11.553994245s" podCreationTimestamp="2025-11-29 09:16:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:16:32.553715406 +0000 UTC m=+17.207611117" watchObservedRunningTime="2025-11-29 09:16:32.553994245 +0000 UTC m=+17.207889964"
	Nov 29 09:16:34 default-k8s-diff-port-632243 kubelet[1296]: I1129 09:16:34.476265    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dn2x\" (UniqueName: \"kubernetes.io/projected/2d48cacb-d056-407e-9a3b-3c0ac0e7456f-kube-api-access-7dn2x\") pod \"busybox\" (UID: \"2d48cacb-d056-407e-9a3b-3c0ac0e7456f\") " pod="default/busybox"
	Nov 29 09:16:36 default-k8s-diff-port-632243 kubelet[1296]: I1129 09:16:36.529247    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.20829936 podStartE2EDuration="2.529223466s" podCreationTimestamp="2025-11-29 09:16:34 +0000 UTC" firstStartedPulling="2025-11-29 09:16:34.742462681 +0000 UTC m=+19.396358373" lastFinishedPulling="2025-11-29 09:16:36.063386785 +0000 UTC m=+20.717282479" observedRunningTime="2025-11-29 09:16:36.52891896 +0000 UTC m=+21.182814674" watchObservedRunningTime="2025-11-29 09:16:36.529223466 +0000 UTC m=+21.183119182"
	
	
	==> storage-provisioner [4a785d12994a52f38e416bc0880c67d398d76e8428711d5381597cbb3217cc08] <==
	I1129 09:16:31.971516       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1129 09:16:31.982636       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1129 09:16:31.982700       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1129 09:16:31.985435       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:16:31.990613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:16:31.990787       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 09:16:31.990994       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-632243_d4b18207-061b-47f0-974c-9c9f12972f09!
	I1129 09:16:31.990945       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3d9080e2-1f84-4caa-8750-c2395a4c0f6c", APIVersion:"v1", ResourceVersion:"405", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-632243_d4b18207-061b-47f0-974c-9c9f12972f09 became leader
	W1129 09:16:31.993408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:16:31.997213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:16:32.091143       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-632243_d4b18207-061b-47f0-974c-9c9f12972f09!
	W1129 09:16:34.000356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:16:34.006171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:16:36.009895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:16:36.016876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:16:38.020388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:16:38.025483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:16:40.029400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:16:40.033750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:16:42.037723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:16:42.041829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:16:44.048616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:16:44.056716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
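Note on the storage-provisioner log above: the repeated "v1 Endpoints is deprecated in v1.33+" warnings are expected noise, because the provisioner's leader election still renews the plain kube-system/k8s.io-minikube-hostpath Endpoints object rather than a coordination.k8s.io Lease. A quick check (a sketch, assuming the default-k8s-diff-port-632243 context from this run is still reachable):

	kubectl --context default-k8s-diff-port-632243 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml

The holder identity recorded there should match the controller name printed at 09:16:31.990994 (default-k8s-diff-port-632243_d4b18207-061b-47f0-974c-9c9f12972f09).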
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-632243 -n default-k8s-diff-port-632243
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-632243 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.66s)

TestStartStop/group/old-k8s-version/serial/Pause (5.82s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-680646 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-680646 --alsologtostderr -v=1: exit status 80 (1.812811072s)

-- stdout --
	* Pausing node old-k8s-version-680646 ... 
	
	

-- /stdout --
** stderr ** 
	I1129 09:17:23.679550  341652 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:17:23.679830  341652 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:17:23.679858  341652 out.go:374] Setting ErrFile to fd 2...
	I1129 09:17:23.679864  341652 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:17:23.680069  341652 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 09:17:23.680315  341652 out.go:368] Setting JSON to false
	I1129 09:17:23.680339  341652 mustload.go:66] Loading cluster: old-k8s-version-680646
	I1129 09:17:23.680828  341652 config.go:182] Loaded profile config "old-k8s-version-680646": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1129 09:17:23.681501  341652 cli_runner.go:164] Run: docker container inspect old-k8s-version-680646 --format={{.State.Status}}
	I1129 09:17:23.700766  341652 host.go:66] Checking if "old-k8s-version-680646" exists ...
	I1129 09:17:23.701116  341652 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:17:23.758918  341652 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-29 09:17:23.748881856 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:17:23.759558  341652 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-680646 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1129 09:17:23.761603  341652 out.go:179] * Pausing node old-k8s-version-680646 ... 
	I1129 09:17:23.762768  341652 host.go:66] Checking if "old-k8s-version-680646" exists ...
	I1129 09:17:23.763089  341652 ssh_runner.go:195] Run: systemctl --version
	I1129 09:17:23.763130  341652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-680646
	I1129 09:17:23.782049  341652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/old-k8s-version-680646/id_rsa Username:docker}
	I1129 09:17:23.885009  341652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:17:23.909590  341652 pause.go:52] kubelet running: true
	I1129 09:17:23.909657  341652 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1129 09:17:24.082957  341652 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1129 09:17:24.083050  341652 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1129 09:17:24.151298  341652 cri.go:89] found id: "c5f2e76d762bbed4aac24938d20ed8ef6bc68a75c8faeace43ca72adfebaa06f"
	I1129 09:17:24.151321  341652 cri.go:89] found id: "f7b192ed98d03e41ad05d96225f71cd6ca9e5e80615108419e5489cfe0ae91e8"
	I1129 09:17:24.151325  341652 cri.go:89] found id: "e9a43784b2c72acefa4683955a09e9ac167529a849f27e92985305377c18378c"
	I1129 09:17:24.151330  341652 cri.go:89] found id: "4eeb9cde84ff02b79f96004411581e8503a4fc89f1155b4646dd015b41a654c5"
	I1129 09:17:24.151335  341652 cri.go:89] found id: "56c6159514de487ff8175db94d66f55079bfff299bcf0181130cc9274ba6fbd4"
	I1129 09:17:24.151341  341652 cri.go:89] found id: "f619cafca5a17742a3c6fba5014451687d7d35e25977a157e5be1c8489be5079"
	I1129 09:17:24.151345  341652 cri.go:89] found id: "fc21916bee97cf411bc0e0fecd6723e2e6882a5a2e9c27cf65544bc90cf2c965"
	I1129 09:17:24.151350  341652 cri.go:89] found id: "30573a3cd5db71fef67e2dd17636eef9fcc8eb82fe36a7ff2ed1d3a6ca9f1919"
	I1129 09:17:24.151355  341652 cri.go:89] found id: "719922d67f4629eeb37ff02ef625a4a45934ecac6b66eb3b61978808b6a57fde"
	I1129 09:17:24.151362  341652 cri.go:89] found id: "e4c8a4234fb3187bd7599e962d6a85a954434a95064cb491b1e31a4ed482fd06"
	I1129 09:17:24.151371  341652 cri.go:89] found id: "61ce0b8ff133dd7770871455d52b8eb5571079a0a2609fadc954e3bec70465cd"
	I1129 09:17:24.151375  341652 cri.go:89] found id: ""
	I1129 09:17:24.151417  341652 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 09:17:24.164237  341652 retry.go:31] will retry after 259.193518ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:17:24Z" level=error msg="open /run/runc: no such file or directory"
	I1129 09:17:24.423753  341652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:17:24.437913  341652 pause.go:52] kubelet running: false
	I1129 09:17:24.438003  341652 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1129 09:17:24.587619  341652 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1129 09:17:24.587690  341652 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1129 09:17:24.657002  341652 cri.go:89] found id: "c5f2e76d762bbed4aac24938d20ed8ef6bc68a75c8faeace43ca72adfebaa06f"
	I1129 09:17:24.657026  341652 cri.go:89] found id: "f7b192ed98d03e41ad05d96225f71cd6ca9e5e80615108419e5489cfe0ae91e8"
	I1129 09:17:24.657032  341652 cri.go:89] found id: "e9a43784b2c72acefa4683955a09e9ac167529a849f27e92985305377c18378c"
	I1129 09:17:24.657038  341652 cri.go:89] found id: "4eeb9cde84ff02b79f96004411581e8503a4fc89f1155b4646dd015b41a654c5"
	I1129 09:17:24.657043  341652 cri.go:89] found id: "56c6159514de487ff8175db94d66f55079bfff299bcf0181130cc9274ba6fbd4"
	I1129 09:17:24.657048  341652 cri.go:89] found id: "f619cafca5a17742a3c6fba5014451687d7d35e25977a157e5be1c8489be5079"
	I1129 09:17:24.657052  341652 cri.go:89] found id: "fc21916bee97cf411bc0e0fecd6723e2e6882a5a2e9c27cf65544bc90cf2c965"
	I1129 09:17:24.657056  341652 cri.go:89] found id: "30573a3cd5db71fef67e2dd17636eef9fcc8eb82fe36a7ff2ed1d3a6ca9f1919"
	I1129 09:17:24.657060  341652 cri.go:89] found id: "719922d67f4629eeb37ff02ef625a4a45934ecac6b66eb3b61978808b6a57fde"
	I1129 09:17:24.657075  341652 cri.go:89] found id: "e4c8a4234fb3187bd7599e962d6a85a954434a95064cb491b1e31a4ed482fd06"
	I1129 09:17:24.657083  341652 cri.go:89] found id: "61ce0b8ff133dd7770871455d52b8eb5571079a0a2609fadc954e3bec70465cd"
	I1129 09:17:24.657087  341652 cri.go:89] found id: ""
	I1129 09:17:24.657136  341652 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 09:17:24.670095  341652 retry.go:31] will retry after 494.356965ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:17:24Z" level=error msg="open /run/runc: no such file or directory"
	I1129 09:17:25.164763  341652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:17:25.178324  341652 pause.go:52] kubelet running: false
	I1129 09:17:25.178389  341652 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1129 09:17:25.340445  341652 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1129 09:17:25.340534  341652 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1129 09:17:25.409516  341652 cri.go:89] found id: "c5f2e76d762bbed4aac24938d20ed8ef6bc68a75c8faeace43ca72adfebaa06f"
	I1129 09:17:25.409540  341652 cri.go:89] found id: "f7b192ed98d03e41ad05d96225f71cd6ca9e5e80615108419e5489cfe0ae91e8"
	I1129 09:17:25.409544  341652 cri.go:89] found id: "e9a43784b2c72acefa4683955a09e9ac167529a849f27e92985305377c18378c"
	I1129 09:17:25.409553  341652 cri.go:89] found id: "4eeb9cde84ff02b79f96004411581e8503a4fc89f1155b4646dd015b41a654c5"
	I1129 09:17:25.409557  341652 cri.go:89] found id: "56c6159514de487ff8175db94d66f55079bfff299bcf0181130cc9274ba6fbd4"
	I1129 09:17:25.409561  341652 cri.go:89] found id: "f619cafca5a17742a3c6fba5014451687d7d35e25977a157e5be1c8489be5079"
	I1129 09:17:25.409563  341652 cri.go:89] found id: "fc21916bee97cf411bc0e0fecd6723e2e6882a5a2e9c27cf65544bc90cf2c965"
	I1129 09:17:25.409566  341652 cri.go:89] found id: "30573a3cd5db71fef67e2dd17636eef9fcc8eb82fe36a7ff2ed1d3a6ca9f1919"
	I1129 09:17:25.409569  341652 cri.go:89] found id: "719922d67f4629eeb37ff02ef625a4a45934ecac6b66eb3b61978808b6a57fde"
	I1129 09:17:25.409576  341652 cri.go:89] found id: "e4c8a4234fb3187bd7599e962d6a85a954434a95064cb491b1e31a4ed482fd06"
	I1129 09:17:25.409579  341652 cri.go:89] found id: "61ce0b8ff133dd7770871455d52b8eb5571079a0a2609fadc954e3bec70465cd"
	I1129 09:17:25.409581  341652 cri.go:89] found id: ""
	I1129 09:17:25.409618  341652 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 09:17:25.423954  341652 out.go:203] 
	W1129 09:17:25.425139  341652 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:17:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:17:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 09:17:25.425162  341652 out.go:285] * 
	* 
	W1129 09:17:25.429066  341652 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 09:17:25.430576  341652 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-680646 --alsologtostderr -v=1 failed: exit status 80
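The failure mode is visible in the stderr above: pause first disables the kubelet, then tries to enumerate running containers with "sudo runc list -f json", which reads runc's default state directory /run/runc; on this CRI-O node that directory does not exist, so every retry fails the same way and minikube exits with GUEST_PAUSE. A hedged way to re-run the failing probe by hand (assuming the profile is still up; the alternate state-dir path is illustrative, not taken from this run):

	minikube -p old-k8s-version-680646 ssh -- "sudo ls -d /run/runc; sudo runc list -f json"

If CRI-O is configured with a different runtime root (or with crun), the listing would have to be pointed at that root instead, e.g. "sudo runc --root <state-dir> list".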
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-680646
helpers_test.go:243: (dbg) docker inspect old-k8s-version-680646:

-- stdout --
	[
	    {
	        "Id": "09f4f79f42ba3a1cca8b8af1853d931e4dbcad3c3c7527a57c7e84f8c2ac2ab8",
	        "Created": "2025-11-29T09:15:05.20238369Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 328733,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T09:16:20.785552494Z",
	            "FinishedAt": "2025-11-29T09:16:19.762789264Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/09f4f79f42ba3a1cca8b8af1853d931e4dbcad3c3c7527a57c7e84f8c2ac2ab8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/09f4f79f42ba3a1cca8b8af1853d931e4dbcad3c3c7527a57c7e84f8c2ac2ab8/hostname",
	        "HostsPath": "/var/lib/docker/containers/09f4f79f42ba3a1cca8b8af1853d931e4dbcad3c3c7527a57c7e84f8c2ac2ab8/hosts",
	        "LogPath": "/var/lib/docker/containers/09f4f79f42ba3a1cca8b8af1853d931e4dbcad3c3c7527a57c7e84f8c2ac2ab8/09f4f79f42ba3a1cca8b8af1853d931e4dbcad3c3c7527a57c7e84f8c2ac2ab8-json.log",
	        "Name": "/old-k8s-version-680646",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-680646:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-680646",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "09f4f79f42ba3a1cca8b8af1853d931e4dbcad3c3c7527a57c7e84f8c2ac2ab8",
	                "LowerDir": "/var/lib/docker/overlay2/968ca6ee81356bbcecebb99911f7a3b0a6f59a701eda8a25aa396e0371a519e5-init/diff:/var/lib/docker/overlay2/5b012372cfb54f6c71f4d7f0bca0124866eeda530eaa04bd84a67c7b4c8b35a8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/968ca6ee81356bbcecebb99911f7a3b0a6f59a701eda8a25aa396e0371a519e5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/968ca6ee81356bbcecebb99911f7a3b0a6f59a701eda8a25aa396e0371a519e5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/968ca6ee81356bbcecebb99911f7a3b0a6f59a701eda8a25aa396e0371a519e5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-680646",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-680646/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-680646",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-680646",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-680646",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "9aa2a0819bb4b637403ff1d301dc250efed36cb3be8c34b124bb6c968ddcdd86",
	            "SandboxKey": "/var/run/docker/netns/9aa2a0819bb4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-680646": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a43c754cd40971db489179630ca1055c6922bb09bc13c0b7b4d8e4460b07cb9b",
	                    "EndpointID": "ec6ae846d56b46e0be2dd84d7fd6dd173a155a1238d66690f2ab03e7fdfb44a1",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "ba:2e:94:e8:ca:88",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-680646",
	                        "09f4f79f42ba"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
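For reference, the host port mapping in the inspect output above (SSH on 127.0.0.1:33109) is exactly what the pause command resolved at 09:17:23.782049 using the same Go template; the standalone equivalent of that lookup is:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-680646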
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-680646 -n old-k8s-version-680646
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-680646 -n old-k8s-version-680646: exit status 2 (341.229686ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-680646 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-680646 logs -n 25: (1.182549087s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-628644 sudo containerd config dump                                                                                                                                                                                                  │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo crio config                                                                                                                                                                                                             │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ delete  │ -p bridge-628644                                                                                                                                                                                                                              │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ delete  │ -p disable-driver-mounts-327778                                                                                                                                                                                                               │ disable-driver-mounts-327778 │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ start   │ -p default-k8s-diff-port-632243 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:16 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-680646 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	│ stop    │ -p old-k8s-version-680646 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:16 UTC │
	│ addons  │ enable metrics-server -p no-preload-897274 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	│ stop    │ -p no-preload-897274 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:16 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-680646 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:16 UTC │
	│ start   │ -p old-k8s-version-680646 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:17 UTC │
	│ start   │ -p no-preload-897274 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-160987 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-632243 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	│ stop    │ -p embed-certs-160987 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:17 UTC │
	│ stop    │ -p default-k8s-diff-port-632243 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-160987 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ start   │ -p embed-certs-160987 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-632243 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ start   │ -p default-k8s-diff-port-632243 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │                     │
	│ image   │ old-k8s-version-680646 image list --format=json                                                                                                                                                                                               │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ pause   │ -p old-k8s-version-680646 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:17:02
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 09:17:02.516567  336858 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:17:02.516867  336858 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:17:02.516879  336858 out.go:374] Setting ErrFile to fd 2...
	I1129 09:17:02.516885  336858 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:17:02.517202  336858 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 09:17:02.517652  336858 out.go:368] Setting JSON to false
	I1129 09:17:02.519042  336858 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3574,"bootTime":1764404248,"procs":398,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 09:17:02.519120  336858 start.go:143] virtualization: kvm guest
	I1129 09:17:02.523941  336858 out.go:179] * [default-k8s-diff-port-632243] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 09:17:02.525532  336858 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:17:02.525531  336858 notify.go:221] Checking for updates...
	I1129 09:17:02.528359  336858 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:17:02.529548  336858 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 09:17:02.530740  336858 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5652/.minikube
	I1129 09:17:02.532045  336858 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 09:17:02.534230  336858 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:17:02.536057  336858 config.go:182] Loaded profile config "default-k8s-diff-port-632243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:17:02.536789  336858 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:17:02.563686  336858 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1129 09:17:02.563830  336858 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:17:02.624956  336858 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-29 09:17:02.613814827 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:17:02.625128  336858 docker.go:319] overlay module found
	I1129 09:17:02.627889  336858 out.go:179] * Using the docker driver based on existing profile
	I1129 09:17:02.629360  336858 start.go:309] selected driver: docker
	I1129 09:17:02.629383  336858 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-632243 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-632243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:17:02.629528  336858 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:17:02.630404  336858 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:17:02.700548  336858 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-29 09:17:02.68823324 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
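The one-line dump above is the decoded output of the `docker system info --format "{{json .}}"` probe two lines earlier. The handful of fields the driver validation actually acts on (CPU and memory capacity, plus the host cgroup driver that minikube later matches CRI-O against) can be pulled out readably with jq, assuming jq is available on the host:

  # Pretty-print the fields the driver validation cares about (jq assumed installed)
  docker system info --format '{{json .}}' | jq '{NCPU, MemTotal, CgroupDriver, OperatingSystem, ServerVersion}'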
	I1129 09:17:02.700957  336858 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:17:02.701000  336858 cni.go:84] Creating CNI manager for ""
	I1129 09:17:02.701073  336858 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:17:02.701133  336858 start.go:353] cluster config:
	{Name:default-k8s-diff-port-632243 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-632243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:17:02.703361  336858 out.go:179] * Starting "default-k8s-diff-port-632243" primary control-plane node in "default-k8s-diff-port-632243" cluster
	I1129 09:17:02.705024  336858 cache.go:134] Beginning downloading kic base image for docker with crio
	I1129 09:17:02.706697  336858 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 09:17:02.708213  336858 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:17:02.708256  336858 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1129 09:17:02.708273  336858 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 09:17:02.708284  336858 cache.go:65] Caching tarball of preloaded images
	I1129 09:17:02.708534  336858 preload.go:238] Found /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1129 09:17:02.708554  336858 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1129 09:17:02.708687  336858 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/config.json ...
	I1129 09:17:02.732236  336858 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 09:17:02.732260  336858 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 09:17:02.732283  336858 cache.go:243] Successfully downloaded all kic artifacts
	I1129 09:17:02.732319  336858 start.go:360] acquireMachinesLock for default-k8s-diff-port-632243: {Name:mk4d57d40865f49c5625093aed79ed0eb9003360 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:17:02.732398  336858 start.go:364] duration metric: took 48.489µs to acquireMachinesLock for "default-k8s-diff-port-632243"
	I1129 09:17:02.732422  336858 start.go:96] Skipping create...Using existing machine configuration
	I1129 09:17:02.732429  336858 fix.go:54] fixHost starting: 
	I1129 09:17:02.732726  336858 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-632243 --format={{.State.Status}}
	I1129 09:17:02.753771  336858 fix.go:112] recreateIfNeeded on default-k8s-diff-port-632243: state=Stopped err=<nil>
	W1129 09:17:02.753806  336858 fix.go:138] unexpected machine state, will restart: <nil>
	W1129 09:17:00.536112  331191 pod_ready.go:104] pod "coredns-66bc5c9577-85hh2" is not "Ready", error: <nil>
	W1129 09:17:02.536306  331191 pod_ready.go:104] pod "coredns-66bc5c9577-85hh2" is not "Ready", error: <nil>
	W1129 09:17:02.212593  328395 pod_ready.go:104] pod "coredns-5dd5756b68-lwg8c" is not "Ready", error: <nil>
	W1129 09:17:04.711471  328395 pod_ready.go:104] pod "coredns-5dd5756b68-lwg8c" is not "Ready", error: <nil>
	I1129 09:17:02.335347  336547 out.go:252] * Restarting existing docker container for "embed-certs-160987" ...
	I1129 09:17:02.335454  336547 cli_runner.go:164] Run: docker start embed-certs-160987
	I1129 09:17:02.636092  336547 cli_runner.go:164] Run: docker container inspect embed-certs-160987 --format={{.State.Status}}
	I1129 09:17:02.660922  336547 kic.go:430] container "embed-certs-160987" state is running.
	I1129 09:17:02.661621  336547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-160987
	I1129 09:17:02.685105  336547 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/config.json ...
	I1129 09:17:02.685323  336547 machine.go:94] provisionDockerMachine start ...
	I1129 09:17:02.685370  336547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:17:02.707931  336547 main.go:143] libmachine: Using SSH client type: native
	I1129 09:17:02.708250  336547 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1129 09:17:02.708267  336547 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:17:02.708945  336547 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44654->127.0.0.1:33119: read: connection reset by peer
	I1129 09:17:05.856174  336547 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-160987
	
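The "connection reset by peer" a few lines up is the expected first-dial failure: sshd inside the just-restarted container is not listening yet, so the provisioner keeps retrying the `hostname` probe until it answers (here roughly three seconds later). A hand-rolled equivalent, with the port and key path taken from this log, might look like:

  # Retry the SSH probe until the container's sshd comes up (sketch)
  KEY=/home/jenkins/minikube-integration/22000-5652/.minikube/machines/embed-certs-160987/id_rsa
  until ssh -i "$KEY" -p 33119 -o StrictHostKeyChecking=no -o ConnectTimeout=2 docker@127.0.0.1 hostname; do
    sleep 1   # first attempts are refused/reset while the container boots
  done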
	I1129 09:17:05.856208  336547 ubuntu.go:182] provisioning hostname "embed-certs-160987"
	I1129 09:17:05.856320  336547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:17:05.875744  336547 main.go:143] libmachine: Using SSH client type: native
	I1129 09:17:05.876079  336547 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1129 09:17:05.876103  336547 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-160987 && echo "embed-certs-160987" | sudo tee /etc/hostname
	I1129 09:17:06.032777  336547 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-160987
	
	I1129 09:17:06.032893  336547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:17:06.052878  336547 main.go:143] libmachine: Using SSH client type: native
	I1129 09:17:06.053113  336547 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1129 09:17:06.053137  336547 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-160987' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-160987/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-160987' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:17:06.198498  336547 main.go:143] libmachine: SSH cmd err, output: <nil>: 
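The shell block above keeps the node's own hostname resolvable: an existing `127.0.1.1` entry is rewritten in place, otherwise one is appended. Either way, /etc/hosts afterwards contains a line like this (illustrative):

  127.0.1.1 embed-certs-160987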
	I1129 09:17:06.198524  336547 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-5652/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-5652/.minikube}
	I1129 09:17:06.198564  336547 ubuntu.go:190] setting up certificates
	I1129 09:17:06.198577  336547 provision.go:84] configureAuth start
	I1129 09:17:06.198648  336547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-160987
	I1129 09:17:06.219626  336547 provision.go:143] copyHostCerts
	I1129 09:17:06.219696  336547 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem, removing ...
	I1129 09:17:06.219708  336547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem
	I1129 09:17:06.219789  336547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem (1078 bytes)
	I1129 09:17:06.219929  336547 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem, removing ...
	I1129 09:17:06.219944  336547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem
	I1129 09:17:06.219987  336547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem (1123 bytes)
	I1129 09:17:06.220054  336547 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem, removing ...
	I1129 09:17:06.220068  336547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem
	I1129 09:17:06.220092  336547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem (1675 bytes)
	I1129 09:17:06.220148  336547 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem org=jenkins.embed-certs-160987 san=[127.0.0.1 192.168.85.2 embed-certs-160987 localhost minikube]
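This step issues the machine's server certificate from the shared minikube CA, with the SAN list printed in the log (loopback, the container IP, the hostname, localhost, minikube). A rough openssl equivalent, a sketch rather than what provision.go literally executes:

  # Sketch: sign a server cert with the same SAN set against the minikube CA
  openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
    -subj "/O=jenkins.embed-certs-160987" -out server.csr
  openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
    -days 365 -out server.pem \
    -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:embed-certs-160987,DNS:localhost,DNS:minikube')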
	I1129 09:17:06.270790  336547 provision.go:177] copyRemoteCerts
	I1129 09:17:06.270869  336547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:17:06.270930  336547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:17:06.292671  336547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/embed-certs-160987/id_rsa Username:docker}
	I1129 09:17:06.398202  336547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1129 09:17:06.417390  336547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1129 09:17:06.436495  336547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1129 09:17:06.455822  336547 provision.go:87] duration metric: took 257.228509ms to configureAuth
	I1129 09:17:06.455865  336547 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:17:06.456076  336547 config.go:182] Loaded profile config "embed-certs-160987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:17:06.456197  336547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:17:06.476477  336547 main.go:143] libmachine: Using SSH client type: native
	I1129 09:17:06.476726  336547 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1129 09:17:06.476750  336547 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 09:17:06.819205  336547 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 09:17:06.819238  336547 machine.go:97] duration metric: took 4.133904808s to provisionDockerMachine
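The option file written just above only takes effect because CRI-O is restarted in the same command; presumably the kicbase image's crio.service expands $CRIO_MINIKUBE_OPTIONS from /etc/sysconfig/crio.minikube on its ExecStart line (that wiring is an assumption, not shown in this log). One way to confirm the flag reached the daemon:

  # After the restart, the flag should appear on crio's command line
  pgrep -a crio | grep -o -- '--insecure-registry [^ ]*'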
	I1129 09:17:06.819263  336547 start.go:293] postStartSetup for "embed-certs-160987" (driver="docker")
	I1129 09:17:06.819278  336547 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:17:06.819352  336547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:17:06.819407  336547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:17:06.840865  336547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/embed-certs-160987/id_rsa Username:docker}
	I1129 09:17:06.944808  336547 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:17:06.949300  336547 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:17:06.949336  336547 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:17:06.949349  336547 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5652/.minikube/addons for local assets ...
	I1129 09:17:06.949406  336547 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5652/.minikube/files for local assets ...
	I1129 09:17:06.949554  336547 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem -> 92162.pem in /etc/ssl/certs
	I1129 09:17:06.949668  336547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:17:06.958186  336547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem --> /etc/ssl/certs/92162.pem (1708 bytes)
	I1129 09:17:06.977944  336547 start.go:296] duration metric: took 158.65369ms for postStartSetup
	I1129 09:17:06.978035  336547 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:17:06.978090  336547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:17:06.998390  336547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/embed-certs-160987/id_rsa Username:docker}
	I1129 09:17:02.756388  336858 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-632243" ...
	I1129 09:17:02.756503  336858 cli_runner.go:164] Run: docker start default-k8s-diff-port-632243
	I1129 09:17:03.067953  336858 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-632243 --format={{.State.Status}}
	I1129 09:17:03.088190  336858 kic.go:430] container "default-k8s-diff-port-632243" state is running.
	I1129 09:17:03.088676  336858 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-632243
	I1129 09:17:03.108471  336858 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/config.json ...
	I1129 09:17:03.108793  336858 machine.go:94] provisionDockerMachine start ...
	I1129 09:17:03.108902  336858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-632243
	I1129 09:17:03.129437  336858 main.go:143] libmachine: Using SSH client type: native
	I1129 09:17:03.129698  336858 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1129 09:17:03.129713  336858 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:17:03.130314  336858 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34754->127.0.0.1:33124: read: connection reset by peer
	I1129 09:17:06.279831  336858 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-632243
	
	I1129 09:17:06.279883  336858 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-632243"
	I1129 09:17:06.279971  336858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-632243
	I1129 09:17:06.301443  336858 main.go:143] libmachine: Using SSH client type: native
	I1129 09:17:06.301714  336858 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1129 09:17:06.301730  336858 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-632243 && echo "default-k8s-diff-port-632243" | sudo tee /etc/hostname
	I1129 09:17:06.459726  336858 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-632243
	
	I1129 09:17:06.459823  336858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-632243
	I1129 09:17:06.481195  336858 main.go:143] libmachine: Using SSH client type: native
	I1129 09:17:06.481408  336858 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1129 09:17:06.481426  336858 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-632243' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-632243/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-632243' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:17:06.628721  336858 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 09:17:06.628753  336858 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-5652/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-5652/.minikube}
	I1129 09:17:06.628816  336858 ubuntu.go:190] setting up certificates
	I1129 09:17:06.628836  336858 provision.go:84] configureAuth start
	I1129 09:17:06.628913  336858 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-632243
	I1129 09:17:06.648660  336858 provision.go:143] copyHostCerts
	I1129 09:17:06.648735  336858 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem, removing ...
	I1129 09:17:06.648748  336858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem
	I1129 09:17:06.648801  336858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem (1078 bytes)
	I1129 09:17:06.648948  336858 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem, removing ...
	I1129 09:17:06.648961  336858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem
	I1129 09:17:06.648987  336858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem (1123 bytes)
	I1129 09:17:06.649079  336858 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem, removing ...
	I1129 09:17:06.649088  336858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem
	I1129 09:17:06.649108  336858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem (1675 bytes)
	I1129 09:17:06.649158  336858 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-632243 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-632243 localhost minikube]
	I1129 09:17:06.671719  336858 provision.go:177] copyRemoteCerts
	I1129 09:17:06.671792  336858 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:17:06.671835  336858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-632243
	I1129 09:17:06.691597  336858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/default-k8s-diff-port-632243/id_rsa Username:docker}
	I1129 09:17:06.798451  336858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1129 09:17:06.823352  336858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1129 09:17:06.844669  336858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1129 09:17:06.864460  336858 provision.go:87] duration metric: took 235.597243ms to configureAuth
	I1129 09:17:06.864493  336858 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:17:06.864679  336858 config.go:182] Loaded profile config "default-k8s-diff-port-632243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:17:06.864807  336858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-632243
	I1129 09:17:06.885331  336858 main.go:143] libmachine: Using SSH client type: native
	I1129 09:17:06.885563  336858 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1129 09:17:06.885598  336858 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 09:17:07.241084  336858 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 09:17:07.241113  336858 machine.go:97] duration metric: took 4.132299373s to provisionDockerMachine
	I1129 09:17:07.241127  336858 start.go:293] postStartSetup for "default-k8s-diff-port-632243" (driver="docker")
	I1129 09:17:07.241140  336858 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:17:07.241197  336858 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:17:07.241245  336858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-632243
	I1129 09:17:07.263881  336858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/default-k8s-diff-port-632243/id_rsa Username:docker}
	I1129 09:17:07.368872  336858 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:17:07.372875  336858 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:17:07.372910  336858 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:17:07.372925  336858 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5652/.minikube/addons for local assets ...
	I1129 09:17:07.372988  336858 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5652/.minikube/files for local assets ...
	I1129 09:17:07.373115  336858 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem -> 92162.pem in /etc/ssl/certs
	I1129 09:17:07.373246  336858 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:17:07.382330  336858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem --> /etc/ssl/certs/92162.pem (1708 bytes)
	I1129 09:17:07.402608  336858 start.go:296] duration metric: took 161.465373ms for postStartSetup
	I1129 09:17:07.402707  336858 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:17:07.402757  336858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-632243
	I1129 09:17:07.423633  336858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/default-k8s-diff-port-632243/id_rsa Username:docker}
	W1129 09:17:05.035558  331191 pod_ready.go:104] pod "coredns-66bc5c9577-85hh2" is not "Ready", error: <nil>
	W1129 09:17:07.035927  331191 pod_ready.go:104] pod "coredns-66bc5c9577-85hh2" is not "Ready", error: <nil>
	I1129 09:17:07.099415  336547 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 09:17:07.104603  336547 fix.go:56] duration metric: took 4.791698243s for fixHost
	I1129 09:17:07.104629  336547 start.go:83] releasing machines lock for "embed-certs-160987", held for 4.791746655s
	I1129 09:17:07.104692  336547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-160987
	I1129 09:17:07.125915  336547 ssh_runner.go:195] Run: cat /version.json
	I1129 09:17:07.125936  336547 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:17:07.125975  336547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:17:07.125998  336547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:17:07.147656  336547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/embed-certs-160987/id_rsa Username:docker}
	I1129 09:17:07.148010  336547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/embed-certs-160987/id_rsa Username:docker}
	I1129 09:17:07.310280  336547 ssh_runner.go:195] Run: systemctl --version
	I1129 09:17:07.317395  336547 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 09:17:07.355516  336547 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:17:07.361026  336547 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:17:07.361114  336547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:17:07.369613  336547 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1129 09:17:07.369636  336547 start.go:496] detecting cgroup driver to use...
	I1129 09:17:07.369671  336547 detect.go:190] detected "systemd" cgroup driver on host os
	I1129 09:17:07.369715  336547 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 09:17:07.385798  336547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 09:17:07.399325  336547 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:17:07.399395  336547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:17:07.415949  336547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:17:07.430893  336547 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:17:07.515136  336547 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:17:07.611427  336547 docker.go:234] disabling docker service ...
	I1129 09:17:07.611489  336547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:17:07.627166  336547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:17:07.641111  336547 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:17:07.723089  336547 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:17:07.818204  336547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
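Note the disable/mask pairing used for both cri-docker and docker: `disable` only removes the boot-time symlinks, while `mask` links the unit to /dev/null so even an explicit start fails, keeping the Docker-shipped runtimes from coming back behind CRI-O. In isolation:

  sudo systemctl disable docker.socket   # drop from boot targets only
  sudo systemctl mask docker.service     # unit now points at /dev/null; manual starts fail too
  systemctl is-enabled docker.service    # prints "masked"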
	I1129 09:17:07.831025  336547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:17:07.846330  336547 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 09:17:07.846419  336547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:07.856031  336547 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1129 09:17:07.856109  336547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:07.866299  336547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:07.875986  336547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:07.886246  336547 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:17:07.902869  336547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:07.913342  336547 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:07.922537  336547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
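Taken together, the sed pipeline above leaves /etc/crio/crio.conf.d/02-crio.conf with at least the following settings (reconstructed from the commands, not dumped from the file):

  pause_image = "registry.k8s.io/pause:3.10.1"
  cgroup_manager = "systemd"
  conmon_cgroup = "pod"
  default_sysctls = [
    "net.ipv4.ip_unprivileged_port_start=0",
  ]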
	I1129 09:17:07.933484  336547 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:17:07.941931  336547 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:17:07.950899  336547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:17:08.049140  336547 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1129 09:17:08.188232  336547 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 09:17:08.188306  336547 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 09:17:08.192869  336547 start.go:564] Will wait 60s for crictl version
	I1129 09:17:08.192944  336547 ssh_runner.go:195] Run: which crictl
	I1129 09:17:08.197600  336547 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:17:08.231678  336547 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1129 09:17:08.231765  336547 ssh_runner.go:195] Run: crio --version
	I1129 09:17:08.261691  336547 ssh_runner.go:195] Run: crio --version
	I1129 09:17:08.294388  336547 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1129 09:17:07.524062  336858 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 09:17:07.528979  336858 fix.go:56] duration metric: took 4.796544465s for fixHost
	I1129 09:17:07.529007  336858 start.go:83] releasing machines lock for "default-k8s-diff-port-632243", held for 4.796594627s
	I1129 09:17:07.529084  336858 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-632243
	I1129 09:17:07.558318  336858 ssh_runner.go:195] Run: cat /version.json
	I1129 09:17:07.558368  336858 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:17:07.558379  336858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-632243
	I1129 09:17:07.558436  336858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-632243
	I1129 09:17:07.580444  336858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/default-k8s-diff-port-632243/id_rsa Username:docker}
	I1129 09:17:07.580553  336858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/default-k8s-diff-port-632243/id_rsa Username:docker}
	I1129 09:17:07.738087  336858 ssh_runner.go:195] Run: systemctl --version
	I1129 09:17:07.745208  336858 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 09:17:07.787017  336858 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:17:07.792279  336858 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:17:07.792352  336858 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:17:07.800809  336858 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1129 09:17:07.800837  336858 start.go:496] detecting cgroup driver to use...
	I1129 09:17:07.800879  336858 detect.go:190] detected "systemd" cgroup driver on host os
	I1129 09:17:07.800933  336858 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 09:17:07.816342  336858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 09:17:07.831044  336858 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:17:07.831097  336858 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:17:07.846186  336858 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:17:07.860320  336858 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:17:07.951414  336858 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:17:08.048776  336858 docker.go:234] disabling docker service ...
	I1129 09:17:08.048877  336858 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:17:08.065070  336858 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:17:08.080000  336858 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:17:08.174957  336858 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:17:08.265742  336858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:17:08.280261  336858 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:17:08.297274  336858 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 09:17:08.297336  336858 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:08.307809  336858 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1129 09:17:08.307898  336858 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:08.318419  336858 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:08.328442  336858 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:08.338982  336858 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:17:08.348380  336858 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:08.360069  336858 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:08.370279  336858 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:08.380806  336858 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:17:08.389764  336858 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:17:08.399350  336858 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:17:08.488928  336858 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1129 09:17:08.641882  336858 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 09:17:08.641962  336858 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 09:17:08.646164  336858 start.go:564] Will wait 60s for crictl version
	I1129 09:17:08.646231  336858 ssh_runner.go:195] Run: which crictl
	I1129 09:17:08.650908  336858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:17:08.679559  336858 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1129 09:17:08.679646  336858 ssh_runner.go:195] Run: crio --version
	I1129 09:17:08.714765  336858 ssh_runner.go:195] Run: crio --version
	I1129 09:17:08.759649  336858 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1129 09:17:08.295663  336547 cli_runner.go:164] Run: docker network inspect embed-certs-160987 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:17:08.315261  336547 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1129 09:17:08.319652  336547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
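Same /etc/hosts rewrite pattern as during provisioning, with one detail worth noting: the new content is assembled in a temp file and then copied, not renamed, over /etc/hosts, since inside a container that file is typically a bind mount that cannot be replaced by rename. The pattern in isolation:

  # Swap a host entry without renaming over the bind-mounted /etc/hosts
  { grep -v $'\thost.minikube.internal$' /etc/hosts; echo $'192.168.85.1\thost.minikube.internal'; } > /tmp/h.$$
  sudo cp /tmp/h.$$ /etc/hosts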
	I1129 09:17:08.331000  336547 kubeadm.go:884] updating cluster {Name:embed-certs-160987 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-160987 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:17:08.331176  336547 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:17:08.331242  336547 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:17:08.369832  336547 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:17:08.369897  336547 crio.go:433] Images already preloaded, skipping extraction
	I1129 09:17:08.369961  336547 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:17:08.400037  336547 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:17:08.400061  336547 cache_images.go:86] Images are preloaded, skipping loading
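The two `sudo crictl images --output json` runs are how minikube decides that the preload tarball already matches CRI-O's image store, so both extraction and loading can be skipped. The same check by hand, assuming jq is available:

  # List the tags CRI-O already has; preload is skipped when the expected k8s images are present
  sudo crictl images --output json | jq -r '.images[].repoTags[]'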
	I1129 09:17:08.400071  336547 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1129 09:17:08.400201  336547 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-160987 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-160987 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 09:17:08.400283  336547 ssh_runner.go:195] Run: crio config
	I1129 09:17:08.453899  336547 cni.go:84] Creating CNI manager for ""
	I1129 09:17:08.453939  336547 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:17:08.453960  336547 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 09:17:08.453995  336547 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-160987 NodeName:embed-certs-160987 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:17:08.454184  336547 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-160987"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1129 09:17:08.454263  336547 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 09:17:08.462902  336547 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 09:17:08.462984  336547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:17:08.471522  336547 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1129 09:17:08.485472  336547 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:17:08.499649  336547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
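The 2214-byte file staged here is the kubeadm config printed above. On this restart path minikube hands it to kubeadm itself, but the manual equivalent for bootstrapping a fresh node against the same config would be roughly:

  # Hand-run equivalent (sketch only)
  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new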
	I1129 09:17:08.515194  336547 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1129 09:17:08.519231  336547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:17:08.530697  336547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:17:08.626804  336547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:17:08.648449  336547 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987 for IP: 192.168.85.2
	I1129 09:17:08.648474  336547 certs.go:195] generating shared ca certs ...
	I1129 09:17:08.648496  336547 certs.go:227] acquiring lock for ca certs: {Name:mk9945b3aa42b3d50b9bfd795af3a1c63f4e35bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:08.648684  336547 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key
	I1129 09:17:08.648741  336547 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key
	I1129 09:17:08.648753  336547 certs.go:257] generating profile certs ...
	I1129 09:17:08.648878  336547 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/client.key
	I1129 09:17:08.648943  336547 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/apiserver.key.f7c4ad31
	I1129 09:17:08.648995  336547 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/proxy-client.key
	I1129 09:17:08.649151  336547 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216.pem (1338 bytes)
	W1129 09:17:08.649200  336547 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216_empty.pem, impossibly tiny 0 bytes
	I1129 09:17:08.649214  336547 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem (1679 bytes)
	I1129 09:17:08.649253  336547 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem (1078 bytes)
	I1129 09:17:08.649291  336547 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:17:08.649329  336547 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem (1675 bytes)
	I1129 09:17:08.649411  336547 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem (1708 bytes)
	I1129 09:17:08.650143  336547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:17:08.672263  336547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 09:17:08.694521  336547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:17:08.717211  336547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1129 09:17:08.745154  336547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1129 09:17:08.768657  336547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 09:17:08.790607  336547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:17:08.809578  336547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1129 09:17:08.833173  336547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:17:08.853546  336547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216.pem --> /usr/share/ca-certificates/9216.pem (1338 bytes)
	I1129 09:17:08.875724  336547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem --> /usr/share/ca-certificates/92162.pem (1708 bytes)
	I1129 09:17:08.897703  336547 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:17:08.912195  336547 ssh_runner.go:195] Run: openssl version
	I1129 09:17:08.920667  336547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:17:08.930203  336547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:17:08.934417  336547 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:17:08.934483  336547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:17:08.972201  336547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 09:17:08.980892  336547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9216.pem && ln -fs /usr/share/ca-certificates/9216.pem /etc/ssl/certs/9216.pem"
	I1129 09:17:08.990540  336547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9216.pem
	I1129 09:17:08.994704  336547 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:34 /usr/share/ca-certificates/9216.pem
	I1129 09:17:08.994760  336547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9216.pem
	I1129 09:17:09.038463  336547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9216.pem /etc/ssl/certs/51391683.0"
	I1129 09:17:09.047452  336547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/92162.pem && ln -fs /usr/share/ca-certificates/92162.pem /etc/ssl/certs/92162.pem"
	I1129 09:17:09.057168  336547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92162.pem
	I1129 09:17:09.061215  336547 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:34 /usr/share/ca-certificates/92162.pem
	I1129 09:17:09.061286  336547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92162.pem
	I1129 09:17:09.097414  336547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/92162.pem /etc/ssl/certs/3ec20f2e.0"
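
The three openssl/ln pairs above follow OpenSSL's c_rehash convention: every CA under /etc/ssl/certs must be reachable through a symlink named after the certificate's 8-hex-digit subject hash plus a collision counter, which is why minikubeCA.pem ends up behind b5213941.0. A rough Go sketch of one such step, shelling out to openssl the same way the ssh_runner lines do (hashLink is a hypothetical helper, not minikube's API):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// hashLink links certPath under /etc/ssl/certs/<subject-hash>.0,
// mirroring the openssl / ln -fs pairs in the log above.
func hashLink(certPath string) error {
	// `openssl x509 -hash -noout` prints the 8-hex-digit subject hash.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
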
	I1129 09:17:09.106145  336547 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:17:09.110918  336547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1129 09:17:09.153234  336547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1129 09:17:09.210510  336547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1129 09:17:09.279214  336547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1129 09:17:09.343494  336547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1129 09:17:09.407295  336547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
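
Each `-checkend 86400` run asks openssl whether the certificate expires within the next 86400 seconds (24 h); a non-zero exit would make minikube regenerate the control-plane certs instead of reusing them. The same check in pure Go, as a sketch (expiresWithin is an illustrative helper; the path is one of the files probed above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first PEM certificate in path
// expires within d, equivalent to `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
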
	I1129 09:17:09.449079  336547 kubeadm.go:401] StartCluster: {Name:embed-certs-160987 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-160987 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:17:09.449177  336547 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 09:17:09.449260  336547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:17:09.485089  336547 cri.go:89] found id: "b910bdb65bdedc5ad424106b6aea90fdb221e9c9e03ce5e62c16682d9c219dbf"
	I1129 09:17:09.485114  336547 cri.go:89] found id: "d40c5061382593cad885d4b3c86be7a3641ec567ffe3cb652cfd84dd0c2396bf"
	I1129 09:17:09.485120  336547 cri.go:89] found id: "6ee1a1cef6abf99fe2be4154d33fa7e55335140b3c9fc7c979eabca17e682341"
	I1129 09:17:09.485124  336547 cri.go:89] found id: "062c767d0f027b4b3689a35cad7c6003a28dac146ef6a6e9732382f36ec71ffa"
	I1129 09:17:09.485137  336547 cri.go:89] found id: ""
	I1129 09:17:09.485190  336547 ssh_runner.go:195] Run: sudo runc list -f json
	W1129 09:17:09.498914  336547 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:17:09Z" level=error msg="open /run/runc: no such file or directory"
	I1129 09:17:09.498987  336547 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:17:09.508114  336547 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1129 09:17:09.508133  336547 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1129 09:17:09.508192  336547 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1129 09:17:09.516864  336547 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1129 09:17:09.517689  336547 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-160987" does not appear in /home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 09:17:09.518218  336547 kubeconfig.go:62] /home/jenkins/minikube-integration/22000-5652/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-160987" cluster setting kubeconfig missing "embed-certs-160987" context setting]
	I1129 09:17:09.519050  336547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/kubeconfig: {Name:mk6280be5f60ace95b1c1acc672c087e2366542f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:09.520972  336547 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1129 09:17:09.530092  336547 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1129 09:17:09.530138  336547 kubeadm.go:602] duration metric: took 21.99531ms to restartPrimaryControlPlane
	I1129 09:17:09.530148  336547 kubeadm.go:403] duration metric: took 81.080412ms to StartCluster
	I1129 09:17:09.530167  336547 settings.go:142] acquiring lock: {Name:mkebe0b2667fa03f51e92459cd7b7f37fd4c23bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:09.530328  336547 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 09:17:09.532249  336547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/kubeconfig: {Name:mk6280be5f60ace95b1c1acc672c087e2366542f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:09.532528  336547 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 09:17:09.532861  336547 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 09:17:09.532996  336547 config.go:182] Loaded profile config "embed-certs-160987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:17:09.533015  336547 addons.go:70] Setting dashboard=true in profile "embed-certs-160987"
	I1129 09:17:09.533039  336547 addons.go:239] Setting addon dashboard=true in "embed-certs-160987"
	W1129 09:17:09.533048  336547 addons.go:248] addon dashboard should already be in state true
	I1129 09:17:09.533060  336547 addons.go:70] Setting default-storageclass=true in profile "embed-certs-160987"
	I1129 09:17:09.533075  336547 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-160987"
	I1129 09:17:09.533137  336547 host.go:66] Checking if "embed-certs-160987" exists ...
	I1129 09:17:09.533386  336547 cli_runner.go:164] Run: docker container inspect embed-certs-160987 --format={{.State.Status}}
	I1129 09:17:09.533626  336547 cli_runner.go:164] Run: docker container inspect embed-certs-160987 --format={{.State.Status}}
	I1129 09:17:09.533796  336547 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-160987"
	I1129 09:17:09.533855  336547 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-160987"
	W1129 09:17:09.533865  336547 addons.go:248] addon storage-provisioner should already be in state true
	I1129 09:17:09.533889  336547 host.go:66] Checking if "embed-certs-160987" exists ...
	I1129 09:17:09.534401  336547 cli_runner.go:164] Run: docker container inspect embed-certs-160987 --format={{.State.Status}}
	I1129 09:17:09.534604  336547 out.go:179] * Verifying Kubernetes components...
	I1129 09:17:09.538689  336547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:17:09.563549  336547 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:17:09.563552  336547 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1129 09:17:09.565020  336547 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:17:09.565097  336547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 09:17:09.565173  336547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:17:09.565046  336547 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1129 09:17:08.761196  336858 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-632243 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:17:08.782404  336858 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1129 09:17:08.787056  336858 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
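
The /etc/hosts rewrite above avoids editing in place: grep -v drops any stale host.minikube.internal line, the fresh mapping is appended, the result lands in /tmp/h.$$, and sudo cp copies it back. Using cp rather than mv is presumably deliberate, since /etc/hosts is bind-mounted into the kic container and a rename over a bind mount would fail. The filtering step as a Go sketch (upsertHost is a hypothetical helper mirroring the shell pipeline):

package main

import (
	"fmt"
	"strings"
)

// upsertHost drops any existing entry ending in "\t<name>" from an
// /etc/hosts body and appends the new mapping, like grep -v plus echo.
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
}

func main() {
	fmt.Print(upsertHost("127.0.0.1\tlocalhost", "192.168.103.1", "host.minikube.internal"))
}
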
	I1129 09:17:08.798913  336858 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-632243 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-632243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:17:08.799029  336858 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:17:08.799079  336858 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:17:08.837350  336858 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:17:08.837372  336858 crio.go:433] Images already preloaded, skipping extraction
	I1129 09:17:08.837428  336858 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:17:08.866420  336858 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:17:08.866442  336858 cache_images.go:86] Images are preloaded, skipping loading
	I1129 09:17:08.866449  336858 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.1 crio true true} ...
	I1129 09:17:08.866564  336858 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-632243 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-632243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
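
The fragment above is rendered into the systemd drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 379-byte scp a few lines below). The bare `ExecStart=` line is standard systemd override syntax: it empties the ExecStart list inherited from kubelet.service so the drop-in's command replaces the base one instead of being appended to it. A minimal Go sketch of assembling such a fragment (kubeletDropIn is a hypothetical helper; the values are the ones in the log):

package main

import "fmt"

// kubeletDropIn renders a systemd drop-in that overrides ExecStart.
// The leading empty ExecStart= resets the list from the base unit.
func kubeletDropIn(version, nodeName, nodeIP string) string {
	return fmt.Sprintf(`[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s

[Install]
`, version, nodeName, nodeIP)
}

func main() {
	fmt.Print(kubeletDropIn("v1.34.1", "default-k8s-diff-port-632243", "192.168.103.2"))
}
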
	I1129 09:17:08.866626  336858 ssh_runner.go:195] Run: crio config
	I1129 09:17:08.919714  336858 cni.go:84] Creating CNI manager for ""
	I1129 09:17:08.919737  336858 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:17:08.919750  336858 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 09:17:08.919771  336858 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-632243 NodeName:default-k8s-diff-port-632243 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:17:08.919920  336858 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-632243"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
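The generated kubeadm.yaml above stacks four YAML documents separated by `---` (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); kubeadm consumes them all from the single file scp'd to /var/tmp/minikube/kubeadm.yaml.new below. A sketch of walking such a multi-document file with gopkg.in/yaml.v3 and printing each document's kind (the dependency is an assumption; minikube itself renders the file from templates):

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
			break // ran out of ---separated documents
		} else if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}
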
	I1129 09:17:08.919985  336858 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 09:17:08.929015  336858 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 09:17:08.929074  336858 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:17:08.937965  336858 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1129 09:17:08.952738  336858 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:17:08.966732  336858 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1129 09:17:08.980728  336858 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1129 09:17:08.984827  336858 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:17:08.995990  336858 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:17:09.082661  336858 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:17:09.111797  336858 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243 for IP: 192.168.103.2
	I1129 09:17:09.111822  336858 certs.go:195] generating shared ca certs ...
	I1129 09:17:09.111866  336858 certs.go:227] acquiring lock for ca certs: {Name:mk9945b3aa42b3d50b9bfd795af3a1c63f4e35bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:09.112052  336858 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key
	I1129 09:17:09.112688  336858 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key
	I1129 09:17:09.112726  336858 certs.go:257] generating profile certs ...
	I1129 09:17:09.112921  336858 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/client.key
	I1129 09:17:09.113021  336858 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/apiserver.key.6a7d6562
	I1129 09:17:09.113086  336858 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/proxy-client.key
	I1129 09:17:09.113257  336858 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216.pem (1338 bytes)
	W1129 09:17:09.113299  336858 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216_empty.pem, impossibly tiny 0 bytes
	I1129 09:17:09.113313  336858 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem (1679 bytes)
	I1129 09:17:09.113357  336858 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem (1078 bytes)
	I1129 09:17:09.113402  336858 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:17:09.113445  336858 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem (1675 bytes)
	I1129 09:17:09.113511  336858 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem (1708 bytes)
	I1129 09:17:09.115190  336858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:17:09.137644  336858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 09:17:09.158988  336858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:17:09.189279  336858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1129 09:17:09.225046  336858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1129 09:17:09.258006  336858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 09:17:09.287280  336858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:17:09.321294  336858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1129 09:17:09.355385  336858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:17:09.382688  336858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216.pem --> /usr/share/ca-certificates/9216.pem (1338 bytes)
	I1129 09:17:09.410146  336858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem --> /usr/share/ca-certificates/92162.pem (1708 bytes)
	I1129 09:17:09.430817  336858 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:17:09.447052  336858 ssh_runner.go:195] Run: openssl version
	I1129 09:17:09.455827  336858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/92162.pem && ln -fs /usr/share/ca-certificates/92162.pem /etc/ssl/certs/92162.pem"
	I1129 09:17:09.466702  336858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92162.pem
	I1129 09:17:09.471733  336858 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:34 /usr/share/ca-certificates/92162.pem
	I1129 09:17:09.471813  336858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92162.pem
	I1129 09:17:09.512946  336858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/92162.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 09:17:09.522854  336858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:17:09.532621  336858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:17:09.540422  336858 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:17:09.540571  336858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:17:09.608410  336858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 09:17:09.630622  336858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9216.pem && ln -fs /usr/share/ca-certificates/9216.pem /etc/ssl/certs/9216.pem"
	I1129 09:17:09.646595  336858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9216.pem
	I1129 09:17:09.653373  336858 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:34 /usr/share/ca-certificates/9216.pem
	I1129 09:17:09.653440  336858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9216.pem
	I1129 09:17:09.715168  336858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9216.pem /etc/ssl/certs/51391683.0"
	I1129 09:17:09.728975  336858 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:17:09.735349  336858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1129 09:17:09.816627  336858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1129 09:17:09.884865  336858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1129 09:17:09.953256  336858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1129 09:17:10.016619  336858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1129 09:17:10.089295  336858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1129 09:17:10.150348  336858 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-632243 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-632243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:17:10.150515  336858 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 09:17:10.150606  336858 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:17:10.193491  336858 cri.go:89] found id: "b13c8a23740acd98b7a6a7244c86241544729c4895bf870e9bb842604451a0f4"
	I1129 09:17:10.193511  336858 cri.go:89] found id: "2080eaa5b786c79ead07692c870ce9928ace57a47032f699d66882570b205513"
	I1129 09:17:10.193515  336858 cri.go:89] found id: "c75e80b4e2dbb59237ca7e83b6a87a80d377951cce4c561324de39b3ea24a433"
	I1129 09:17:10.193518  336858 cri.go:89] found id: "be8adeee9f904b03165bd07f7f9279fad60f6e70a12d988e651be3f8e0e5974c"
	I1129 09:17:10.193602  336858 cri.go:89] found id: ""
	I1129 09:17:10.193643  336858 ssh_runner.go:195] Run: sudo runc list -f json
	W1129 09:17:10.213937  336858 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:17:10Z" level=error msg="open /run/runc: no such file or directory"
	I1129 09:17:10.214028  336858 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:17:10.232297  336858 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1129 09:17:10.232322  336858 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1129 09:17:10.232369  336858 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1129 09:17:10.243858  336858 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1129 09:17:10.245377  336858 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-632243" does not appear in /home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 09:17:10.246716  336858 kubeconfig.go:62] /home/jenkins/minikube-integration/22000-5652/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-632243" cluster setting kubeconfig missing "default-k8s-diff-port-632243" context setting]
	I1129 09:17:10.248401  336858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/kubeconfig: {Name:mk6280be5f60ace95b1c1acc672c087e2366542f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:10.251027  336858 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1129 09:17:10.263329  336858 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1129 09:17:10.263368  336858 kubeadm.go:602] duration metric: took 31.039015ms to restartPrimaryControlPlane
	I1129 09:17:10.263379  336858 kubeadm.go:403] duration metric: took 113.153865ms to StartCluster
	I1129 09:17:10.263398  336858 settings.go:142] acquiring lock: {Name:mkebe0b2667fa03f51e92459cd7b7f37fd4c23bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:10.263462  336858 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 09:17:10.269465  336858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/kubeconfig: {Name:mk6280be5f60ace95b1c1acc672c087e2366542f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:10.270140  336858 config.go:182] Loaded profile config "default-k8s-diff-port-632243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:17:10.270076  336858 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 09:17:10.270261  336858 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-632243"
	I1129 09:17:10.270287  336858 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-632243"
	W1129 09:17:10.270304  336858 addons.go:248] addon storage-provisioner should already be in state true
	I1129 09:17:10.270337  336858 host.go:66] Checking if "default-k8s-diff-port-632243" exists ...
	I1129 09:17:10.270868  336858 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-632243 --format={{.State.Status}}
	I1129 09:17:10.270921  336858 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-632243"
	I1129 09:17:10.271099  336858 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-632243"
	W1129 09:17:10.271120  336858 addons.go:248] addon dashboard should already be in state true
	I1129 09:17:10.271162  336858 host.go:66] Checking if "default-k8s-diff-port-632243" exists ...
	I1129 09:17:10.271803  336858 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-632243 --format={{.State.Status}}
	I1129 09:17:10.269903  336858 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 09:17:10.270943  336858 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-632243"
	I1129 09:17:10.272544  336858 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-632243"
	I1129 09:17:10.272879  336858 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-632243 --format={{.State.Status}}
	I1129 09:17:10.275285  336858 out.go:179] * Verifying Kubernetes components...
	I1129 09:17:10.276652  336858 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:17:10.307250  336858 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:17:10.308950  336858 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:17:10.308973  336858 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 09:17:10.309046  336858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-632243
	I1129 09:17:10.315630  336858 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1129 09:17:10.317010  336858 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1129 09:17:06.712239  328395 pod_ready.go:104] pod "coredns-5dd5756b68-lwg8c" is not "Ready", error: <nil>
	W1129 09:17:08.713985  328395 pod_ready.go:104] pod "coredns-5dd5756b68-lwg8c" is not "Ready", error: <nil>
	I1129 09:17:10.218648  328395 pod_ready.go:94] pod "coredns-5dd5756b68-lwg8c" is "Ready"
	I1129 09:17:10.218682  328395 pod_ready.go:86] duration metric: took 38.012873691s for pod "coredns-5dd5756b68-lwg8c" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:10.224585  328395 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-680646" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:10.232120  328395 pod_ready.go:94] pod "etcd-old-k8s-version-680646" is "Ready"
	I1129 09:17:10.232250  328395 pod_ready.go:86] duration metric: took 7.637262ms for pod "etcd-old-k8s-version-680646" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:10.236139  328395 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-680646" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:10.242626  328395 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-680646" is "Ready"
	I1129 09:17:10.242729  328395 pod_ready.go:86] duration metric: took 6.562994ms for pod "kube-apiserver-old-k8s-version-680646" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:10.248098  328395 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-680646" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:10.412947  328395 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-680646" is "Ready"
	I1129 09:17:10.412986  328395 pod_ready.go:86] duration metric: took 164.851946ms for pod "kube-controller-manager-old-k8s-version-680646" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:10.611271  328395 pod_ready.go:83] waiting for pod "kube-proxy-plgmf" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:11.010447  328395 pod_ready.go:94] pod "kube-proxy-plgmf" is "Ready"
	I1129 09:17:11.010483  328395 pod_ready.go:86] duration metric: took 399.180359ms for pod "kube-proxy-plgmf" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:11.211887  328395 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-680646" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:11.611656  328395 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-680646" is "Ready"
	I1129 09:17:11.611690  328395 pod_ready.go:86] duration metric: took 399.761614ms for pod "kube-scheduler-old-k8s-version-680646" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:11.611706  328395 pod_ready.go:40] duration metric: took 39.410470281s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
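
Every "waiting for pod ... to be Ready or be gone" block above is the same loop: poll the pod's status on a short interval until it reports Ready, disappears, errors, or the budget expires, then log the per-pod duration metric. A generic sketch of that poll-until shape without pulling in client-go (pollUntil and the stand-in condition are illustrative):

package main

import (
	"context"
	"fmt"
	"time"
)

// pollUntil runs check every interval until it returns true, returns an
// error, or ctx expires, the shape of the pod_ready waits above.
func pollUntil(ctx context.Context, interval time.Duration, check func() (bool, error)) error {
	t := time.NewTicker(interval)
	defer t.Stop()
	for {
		ok, err := check()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-t.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	start := time.Now()
	err := pollUntil(ctx, 500*time.Millisecond, func() (bool, error) {
		return time.Since(start) > 2*time.Second, nil // stand-in for "pod is Ready or gone"
	})
	fmt.Println("wait finished:", err)
}
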
	I1129 09:17:11.681062  328395 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1129 09:17:11.683136  328395 out.go:203] 
	W1129 09:17:11.684454  328395 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1129 09:17:11.685606  328395 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1129 09:17:11.686829  328395 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-680646" cluster and "default" namespace by default
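
The warning at out.go:285 is the Kubernetes version-skew policy surfacing: kubectl is only supported within one minor version of the API server, and 1.34.2 against a 1.28.0 cluster is six minors apart, hence the pointer to the bundled `minikube kubectl`. A small sketch of the skew arithmetic (deliberately naive parsing; version strings from the log):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor number from a "major.minor.patch" version.
// No validation: this assumes well-formed input, unlike a real parser.
func minor(v string) int {
	n, _ := strconv.Atoi(strings.Split(v, ".")[1])
	return n
}

func main() {
	client, server := "1.34.2", "1.28.0"
	skew := minor(client) - minor(server)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("minor skew: %d (supported: <= 1)\n", skew) // prints 6 here
}
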
	I1129 09:17:09.566159  336547 addons.go:239] Setting addon default-storageclass=true in "embed-certs-160987"
	W1129 09:17:09.566185  336547 addons.go:248] addon default-storageclass should already be in state true
	I1129 09:17:09.566212  336547 host.go:66] Checking if "embed-certs-160987" exists ...
	I1129 09:17:09.566696  336547 cli_runner.go:164] Run: docker container inspect embed-certs-160987 --format={{.State.Status}}
	I1129 09:17:09.567447  336547 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1129 09:17:09.567464  336547 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1129 09:17:09.567519  336547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:17:09.595351  336547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/embed-certs-160987/id_rsa Username:docker}
	I1129 09:17:09.605909  336547 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 09:17:09.605948  336547 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 09:17:09.606137  336547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:17:09.616139  336547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/embed-certs-160987/id_rsa Username:docker}
	I1129 09:17:09.635385  336547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/embed-certs-160987/id_rsa Username:docker}
	I1129 09:17:09.727816  336547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:17:09.749525  336547 node_ready.go:35] waiting up to 6m0s for node "embed-certs-160987" to be "Ready" ...
	I1129 09:17:09.750531  336547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:17:09.766303  336547 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1129 09:17:09.766333  336547 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1129 09:17:09.805890  336547 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1129 09:17:09.805924  336547 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1129 09:17:09.811927  336547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 09:17:09.854564  336547 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1129 09:17:09.854599  336547 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1129 09:17:09.914496  336547 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1129 09:17:09.914520  336547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1129 09:17:09.938527  336547 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1129 09:17:09.938549  336547 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1129 09:17:09.962528  336547 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1129 09:17:09.962577  336547 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1129 09:17:09.983095  336547 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1129 09:17:09.983129  336547 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1129 09:17:10.005127  336547 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1129 09:17:10.005249  336547 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1129 09:17:10.036384  336547 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1129 09:17:10.036413  336547 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1129 09:17:10.058137  336547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1129 09:17:11.827730  336547 node_ready.go:49] node "embed-certs-160987" is "Ready"
	I1129 09:17:11.827773  336547 node_ready.go:38] duration metric: took 2.078209043s for node "embed-certs-160987" to be "Ready" ...
	I1129 09:17:11.827790  336547 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:17:11.827861  336547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:17:10.320018  336858 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1129 09:17:10.320041  336858 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1129 09:17:10.320108  336858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-632243
	I1129 09:17:10.327689  336858 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-632243"
	W1129 09:17:10.327721  336858 addons.go:248] addon default-storageclass should already be in state true
	I1129 09:17:10.327747  336858 host.go:66] Checking if "default-k8s-diff-port-632243" exists ...
	I1129 09:17:10.328232  336858 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-632243 --format={{.State.Status}}
	I1129 09:17:10.352961  336858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/default-k8s-diff-port-632243/id_rsa Username:docker}
	I1129 09:17:10.370131  336858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/default-k8s-diff-port-632243/id_rsa Username:docker}
	I1129 09:17:10.378023  336858 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 09:17:10.378057  336858 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 09:17:10.378127  336858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-632243
	I1129 09:17:10.418800  336858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/default-k8s-diff-port-632243/id_rsa Username:docker}
	I1129 09:17:10.488098  336858 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:17:10.513724  336858 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-632243" to be "Ready" ...
	I1129 09:17:10.535707  336858 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1129 09:17:10.535731  336858 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1129 09:17:10.557565  336858 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:17:10.557655  336858 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1129 09:17:10.557666  336858 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1129 09:17:10.580874  336858 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 09:17:10.589021  336858 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1129 09:17:10.589119  336858 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1129 09:17:10.664545  336858 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1129 09:17:10.664568  336858 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1129 09:17:10.689645  336858 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1129 09:17:10.689743  336858 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1129 09:17:10.715183  336858 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1129 09:17:10.715208  336858 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1129 09:17:10.739564  336858 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1129 09:17:10.739612  336858 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1129 09:17:10.764832  336858 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1129 09:17:10.764880  336858 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1129 09:17:10.786820  336858 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1129 09:17:10.786876  336858 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1129 09:17:10.811112  336858 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1129 09:17:12.118076  336858 node_ready.go:49] node "default-k8s-diff-port-632243" is "Ready"
	I1129 09:17:12.119135  336858 node_ready.go:38] duration metric: took 1.6053343s for node "default-k8s-diff-port-632243" to be "Ready" ...
	I1129 09:17:12.119742  336858 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:17:12.120049  336858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:17:12.769752  336547 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.019178744s)
	I1129 09:17:12.770100  336547 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.711923947s)
	I1129 09:17:12.770290  336547 api_server.go:72] duration metric: took 3.237730119s to wait for apiserver process to appear ...
	I1129 09:17:12.770305  336547 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:17:12.770326  336547 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:17:12.769946  336547 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.957969562s)
	I1129 09:17:12.774695  336547 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-160987 addons enable metrics-server
	
	I1129 09:17:12.776427  336547 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1129 09:17:12.776462  336547 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
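
A 500 from /healthz during startup is expected: the [-] lines name the poststart hooks that have not finished yet (here rbac/bootstrap-roles), and the caller simply re-polls until the endpoint returns 200 "ok", which it does a few lines below. A minimal polling sketch, assuming certificate verification is skipped for brevity (minikube itself verifies against the cluster CA rather than skipping TLS checks):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it answers
    // 200 "ok" or the deadline passes. During startup it answers 500 with a
    // per-poststarthook breakdown like the one logged above.
    func waitForHealthz(url string, timeout time.Duration) bool {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// Sketch only: skip verification instead of loading the cluster CA.
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				return true
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return false
    }

    func main() {
    	fmt.Println(waitForHealthz("https://192.168.85.2:8443/healthz", time.Minute))
    }
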
	I1129 09:17:12.790995  336547 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1129 09:17:12.895120  336858 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.337512829s)
	I1129 09:17:12.895216  336858 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.314316305s)
	I1129 09:17:12.895574  336858 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.084422942s)
	I1129 09:17:12.896112  336858 api_server.go:72] duration metric: took 2.623523551s to wait for apiserver process to appear ...
	I1129 09:17:12.896132  336858 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:17:12.896156  336858 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1129 09:17:12.899702  336858 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-632243 addons enable metrics-server
	
	I1129 09:17:12.903738  336858 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1129 09:17:12.903764  336858 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1129 09:17:12.906550  336858 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1129 09:17:09.037752  331191 pod_ready.go:104] pod "coredns-66bc5c9577-85hh2" is not "Ready", error: <nil>
	W1129 09:17:11.038421  331191 pod_ready.go:104] pod "coredns-66bc5c9577-85hh2" is not "Ready", error: <nil>
	I1129 09:17:12.792549  336547 addons.go:530] duration metric: took 3.259698336s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1129 09:17:13.270675  336547 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:17:13.276075  336547 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1129 09:17:13.277477  336547 api_server.go:141] control plane version: v1.34.1
	I1129 09:17:13.277517  336547 api_server.go:131] duration metric: took 507.203499ms to wait for apiserver health ...
	I1129 09:17:13.277529  336547 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:17:13.282281  336547 system_pods.go:59] 8 kube-system pods found
	I1129 09:17:13.282329  336547 system_pods.go:61] "coredns-66bc5c9577-ptx67" [3cdde537-5064-49d7-8c8b-367639774c63] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:17:13.282346  336547 system_pods.go:61] "etcd-embed-certs-160987" [347faf57-8141-49d9-8ef9-6a1b04b8641a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 09:17:13.282364  336547 system_pods.go:61] "kindnet-cvmj6" [239c4b88-9d52-42da-ae39-5eb83d7d3fd1] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1129 09:17:13.282374  336547 system_pods.go:61] "kube-apiserver-embed-certs-160987" [27540c8b-5b66-40c8-91e4-299a0450fd50] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 09:17:13.282387  336547 system_pods.go:61] "kube-controller-manager-embed-certs-160987" [33fd03e7-f337-4fee-b783-ffa135030207] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 09:17:13.282397  336547 system_pods.go:61] "kube-proxy-57l9h" [93cda014-998a-4285-81c6-bead54a287e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1129 09:17:13.282408  336547 system_pods.go:61] "kube-scheduler-embed-certs-160987" [98695b36-0694-44ff-a494-f8316190fcad] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 09:17:13.282415  336547 system_pods.go:61] "storage-provisioner" [3e04560b-9e25-4b2e-9f7e-d55b0ae42dbd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:17:13.282425  336547 system_pods.go:74] duration metric: took 4.889359ms to wait for pod list to return data ...
	I1129 09:17:13.282440  336547 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:17:13.285794  336547 default_sa.go:45] found service account: "default"
	I1129 09:17:13.285823  336547 default_sa.go:55] duration metric: took 3.376181ms for default service account to be created ...
	I1129 09:17:13.285835  336547 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 09:17:13.289736  336547 system_pods.go:86] 8 kube-system pods found
	I1129 09:17:13.289778  336547 system_pods.go:89] "coredns-66bc5c9577-ptx67" [3cdde537-5064-49d7-8c8b-367639774c63] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:17:13.289789  336547 system_pods.go:89] "etcd-embed-certs-160987" [347faf57-8141-49d9-8ef9-6a1b04b8641a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 09:17:13.289801  336547 system_pods.go:89] "kindnet-cvmj6" [239c4b88-9d52-42da-ae39-5eb83d7d3fd1] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1129 09:17:13.289813  336547 system_pods.go:89] "kube-apiserver-embed-certs-160987" [27540c8b-5b66-40c8-91e4-299a0450fd50] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 09:17:13.289828  336547 system_pods.go:89] "kube-controller-manager-embed-certs-160987" [33fd03e7-f337-4fee-b783-ffa135030207] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 09:17:13.289836  336547 system_pods.go:89] "kube-proxy-57l9h" [93cda014-998a-4285-81c6-bead54a287e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1129 09:17:13.289864  336547 system_pods.go:89] "kube-scheduler-embed-certs-160987" [98695b36-0694-44ff-a494-f8316190fcad] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 09:17:13.289872  336547 system_pods.go:89] "storage-provisioner" [3e04560b-9e25-4b2e-9f7e-d55b0ae42dbd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:17:13.289883  336547 system_pods.go:126] duration metric: took 4.009264ms to wait for k8s-apps to be running ...
	I1129 09:17:13.289900  336547 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 09:17:13.289966  336547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:17:13.307331  336547 system_svc.go:56] duration metric: took 17.413222ms WaitForService to wait for kubelet
	I1129 09:17:13.307366  336547 kubeadm.go:587] duration metric: took 3.7748063s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:17:13.307391  336547 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:17:13.311371  336547 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1129 09:17:13.311405  336547 node_conditions.go:123] node cpu capacity is 8
	I1129 09:17:13.311428  336547 node_conditions.go:105] duration metric: took 4.030874ms to run NodePressure ...
	I1129 09:17:13.311443  336547 start.go:242] waiting for startup goroutines ...
	I1129 09:17:13.311458  336547 start.go:247] waiting for cluster config update ...
	I1129 09:17:13.311489  336547 start.go:256] writing updated cluster config ...
	I1129 09:17:13.311895  336547 ssh_runner.go:195] Run: rm -f paused
	I1129 09:17:13.316918  336547 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:17:13.321892  336547 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ptx67" in "kube-system" namespace to be "Ready" or be gone ...
	W1129 09:17:15.327300  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	I1129 09:17:12.907626  336858 addons.go:530] duration metric: took 2.637558772s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1129 09:17:13.396730  336858 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1129 09:17:13.401351  336858 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1129 09:17:13.402337  336858 api_server.go:141] control plane version: v1.34.1
	I1129 09:17:13.402363  336858 api_server.go:131] duration metric: took 506.224229ms to wait for apiserver health ...
	I1129 09:17:13.402372  336858 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:17:13.405669  336858 system_pods.go:59] 8 kube-system pods found
	I1129 09:17:13.405707  336858 system_pods.go:61] "coredns-66bc5c9577-z4m7c" [98358d85-a090-44af-b52c-b5043215489d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:17:13.405715  336858 system_pods.go:61] "etcd-default-k8s-diff-port-632243" [09a34b15-fbfc-4348-90c4-e24e6baf1a19] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 09:17:13.405720  336858 system_pods.go:61] "kindnet-tpstm" [15e600f0-69fa-43be-ad87-07a80e245c73] Running
	I1129 09:17:13.405727  336858 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-632243" [05294706-b493-4660-8b69-19a3686ec539] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 09:17:13.405735  336858 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-632243" [fb12ecb8-1c38-404c-b1f5-c52bd3c76ae3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 09:17:13.405739  336858 system_pods.go:61] "kube-proxy-p2nf7" [50905f73-5af2-401c-a482-7d68d8d3bdc4] Running
	I1129 09:17:13.405744  336858 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-632243" [31003176-dbcb-4f15-88c6-ea1592ffdf1b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 09:17:13.405755  336858 system_pods.go:61] "storage-provisioner" [b28962e0-c388-44d7-8e57-e4030e80dabd] Running
	I1129 09:17:13.405761  336858 system_pods.go:74] duration metric: took 3.383976ms to wait for pod list to return data ...
	I1129 09:17:13.405768  336858 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:17:13.408960  336858 default_sa.go:45] found service account: "default"
	I1129 09:17:13.409075  336858 default_sa.go:55] duration metric: took 3.291457ms for default service account to be created ...
	I1129 09:17:13.409095  336858 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 09:17:13.412512  336858 system_pods.go:86] 8 kube-system pods found
	I1129 09:17:13.412548  336858 system_pods.go:89] "coredns-66bc5c9577-z4m7c" [98358d85-a090-44af-b52c-b5043215489d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:17:13.412562  336858 system_pods.go:89] "etcd-default-k8s-diff-port-632243" [09a34b15-fbfc-4348-90c4-e24e6baf1a19] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 09:17:13.412574  336858 system_pods.go:89] "kindnet-tpstm" [15e600f0-69fa-43be-ad87-07a80e245c73] Running
	I1129 09:17:13.412585  336858 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-632243" [05294706-b493-4660-8b69-19a3686ec539] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 09:17:13.412596  336858 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-632243" [fb12ecb8-1c38-404c-b1f5-c52bd3c76ae3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 09:17:13.412600  336858 system_pods.go:89] "kube-proxy-p2nf7" [50905f73-5af2-401c-a482-7d68d8d3bdc4] Running
	I1129 09:17:13.412610  336858 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-632243" [31003176-dbcb-4f15-88c6-ea1592ffdf1b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 09:17:13.412614  336858 system_pods.go:89] "storage-provisioner" [b28962e0-c388-44d7-8e57-e4030e80dabd] Running
	I1129 09:17:13.412622  336858 system_pods.go:126] duration metric: took 3.519364ms to wait for k8s-apps to be running ...
	I1129 09:17:13.412638  336858 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 09:17:13.412691  336858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:17:13.429973  336858 system_svc.go:56] duration metric: took 17.326281ms WaitForService to wait for kubelet
	I1129 09:17:13.430007  336858 kubeadm.go:587] duration metric: took 3.157528585s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:17:13.430028  336858 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:17:13.433137  336858 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1129 09:17:13.433164  336858 node_conditions.go:123] node cpu capacity is 8
	I1129 09:17:13.433177  336858 node_conditions.go:105] duration metric: took 3.143636ms to run NodePressure ...
	I1129 09:17:13.433188  336858 start.go:242] waiting for startup goroutines ...
	I1129 09:17:13.433195  336858 start.go:247] waiting for cluster config update ...
	I1129 09:17:13.433207  336858 start.go:256] writing updated cluster config ...
	I1129 09:17:13.433491  336858 ssh_runner.go:195] Run: rm -f paused
	I1129 09:17:13.437630  336858 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:17:13.441716  336858 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-z4m7c" in "kube-system" namespace to be "Ready" or be gone ...
	W1129 09:17:15.448064  336858 pod_ready.go:104] pod "coredns-66bc5c9577-z4m7c" is not "Ready", error: <nil>
	W1129 09:17:17.449407  336858 pod_ready.go:104] pod "coredns-66bc5c9577-z4m7c" is not "Ready", error: <nil>
	W1129 09:17:13.536780  331191 pod_ready.go:104] pod "coredns-66bc5c9577-85hh2" is not "Ready", error: <nil>
	W1129 09:17:16.036205  331191 pod_ready.go:104] pod "coredns-66bc5c9577-85hh2" is not "Ready", error: <nil>
	W1129 09:17:18.036831  331191 pod_ready.go:104] pod "coredns-66bc5c9577-85hh2" is not "Ready", error: <nil>
	I1129 09:17:18.537932  331191 pod_ready.go:94] pod "coredns-66bc5c9577-85hh2" is "Ready"
	I1129 09:17:18.537961  331191 pod_ready.go:86] duration metric: took 34.507701467s for pod "coredns-66bc5c9577-85hh2" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:18.541894  331191 pod_ready.go:83] waiting for pod "etcd-no-preload-897274" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:18.546925  331191 pod_ready.go:94] pod "etcd-no-preload-897274" is "Ready"
	I1129 09:17:18.546955  331191 pod_ready.go:86] duration metric: took 5.03231ms for pod "etcd-no-preload-897274" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:18.550208  331191 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-897274" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:18.555380  331191 pod_ready.go:94] pod "kube-apiserver-no-preload-897274" is "Ready"
	I1129 09:17:18.555410  331191 pod_ready.go:86] duration metric: took 5.173912ms for pod "kube-apiserver-no-preload-897274" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:18.558304  331191 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-897274" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:18.735161  331191 pod_ready.go:94] pod "kube-controller-manager-no-preload-897274" is "Ready"
	I1129 09:17:18.735191  331191 pod_ready.go:86] duration metric: took 176.860384ms for pod "kube-controller-manager-no-preload-897274" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:18.935924  331191 pod_ready.go:83] waiting for pod "kube-proxy-h9zhz" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:19.335548  331191 pod_ready.go:94] pod "kube-proxy-h9zhz" is "Ready"
	I1129 09:17:19.335626  331191 pod_ready.go:86] duration metric: took 399.669ms for pod "kube-proxy-h9zhz" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:19.534980  331191 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-897274" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:19.935951  331191 pod_ready.go:94] pod "kube-scheduler-no-preload-897274" is "Ready"
	I1129 09:17:19.935986  331191 pod_ready.go:86] duration metric: took 400.979445ms for pod "kube-scheduler-no-preload-897274" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:19.936003  331191 pod_ready.go:40] duration metric: took 35.910372067s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:17:19.998301  331191 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1129 09:17:20.030076  331191 out.go:179] * Done! kubectl is now configured to use "no-preload-897274" cluster and "default" namespace by default
	W1129 09:17:17.329923  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	W1129 09:17:19.833757  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	W1129 09:17:19.450200  336858 pod_ready.go:104] pod "coredns-66bc5c9577-z4m7c" is not "Ready", error: <nil>
	W1129 09:17:21.948576  336858 pod_ready.go:104] pod "coredns-66bc5c9577-z4m7c" is not "Ready", error: <nil>
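
Each pod_ready.go wait above blocks until the pod either reports the Ready condition or disappears ("Ready" or be gone), retrying on every other state. A client-go sketch of one round of that check, assuming the node-local kubeconfig path from the log and bailing out on any other error (minikube's own loop is more forgiving about transient failures):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReadyOrGone reports true once the pod is Ready or no longer exists,
    // the same "Ready or be gone" condition pod_ready.go waits on above.
    func podReadyOrGone(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    	if apierrors.IsNotFound(err) {
    		return true, nil // pod is gone, stop waiting
    	}
    	if err != nil {
    		return false, err
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
    	defer cancel()
    	for {
    		ok, err := podReadyOrGone(ctx, cs, "kube-system", "coredns-66bc5c9577-ptx67")
    		if ok || err != nil {
    			fmt.Println(ok, err)
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    }
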
	
	
	==> CRI-O <==
	Nov 29 09:16:49 old-k8s-version-680646 crio[566]: time="2025-11-29T09:16:49.994699634Z" level=info msg="Created container 61ce0b8ff133dd7770871455d52b8eb5571079a0a2609fadc954e3bec70465cd: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mn66t/kubernetes-dashboard" id=ce5f9ca1-b2ba-4fed-91d6-4e711445dbd3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:16:49 old-k8s-version-680646 crio[566]: time="2025-11-29T09:16:49.995310868Z" level=info msg="Starting container: 61ce0b8ff133dd7770871455d52b8eb5571079a0a2609fadc954e3bec70465cd" id=fb7e9cc4-b2e0-4e10-96da-9047ee6678bb name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 09:16:49 old-k8s-version-680646 crio[566]: time="2025-11-29T09:16:49.997011636Z" level=info msg="Started container" PID=1746 containerID=61ce0b8ff133dd7770871455d52b8eb5571079a0a2609fadc954e3bec70465cd description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mn66t/kubernetes-dashboard id=fb7e9cc4-b2e0-4e10-96da-9047ee6678bb name=/runtime.v1.RuntimeService/StartContainer sandboxID=56bf2e95637320ec269ba0d7e2915a6318c4fc05c684986d39cf38d50a770511
	Nov 29 09:17:02 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:02.36623605Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=97d52a04-14d4-46aa-bbc1-fd61caa52c55 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:17:02 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:02.367270051Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1a7b4595-59c1-4016-a512-dbc8888deb13 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:17:02 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:02.368446289Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=c098ab86-36b7-4f2a-a2df-ceb05228a93b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:17:02 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:02.368607507Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:02 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:02.373290484Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:02 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:02.373554278Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/491028c8497b0ee8af717e4994dbfea4fa278e870319eb4f4752d9d68d653924/merged/etc/passwd: no such file or directory"
	Nov 29 09:17:02 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:02.373585929Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/491028c8497b0ee8af717e4994dbfea4fa278e870319eb4f4752d9d68d653924/merged/etc/group: no such file or directory"
	Nov 29 09:17:02 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:02.373880932Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:02 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:02.409795372Z" level=info msg="Created container c5f2e76d762bbed4aac24938d20ed8ef6bc68a75c8faeace43ca72adfebaa06f: kube-system/storage-provisioner/storage-provisioner" id=c098ab86-36b7-4f2a-a2df-ceb05228a93b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:17:02 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:02.410623865Z" level=info msg="Starting container: c5f2e76d762bbed4aac24938d20ed8ef6bc68a75c8faeace43ca72adfebaa06f" id=41c210e6-6f95-410f-a826-69f3de7ac5f9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 09:17:02 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:02.412511378Z" level=info msg="Started container" PID=1768 containerID=c5f2e76d762bbed4aac24938d20ed8ef6bc68a75c8faeace43ca72adfebaa06f description=kube-system/storage-provisioner/storage-provisioner id=41c210e6-6f95-410f-a826-69f3de7ac5f9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c9220bd7ef8fc70d03f523d922f4a8ee357b0d1c67fd53cbecbc7ea6f0e1ff11
	Nov 29 09:17:09 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:09.248589002Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a9c52618-016b-4c84-95fe-70adf48227f8 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:17:09 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:09.250500594Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=da2a4d1c-8caa-46db-8a3a-cd2ab575537d name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:17:09 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:09.254344589Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-8d8lv/dashboard-metrics-scraper" id=82c8d848-cc21-406e-8072-6e110d5dcbc4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:17:09 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:09.254539119Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:09 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:09.264597113Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:09 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:09.265354191Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:09 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:09.306445923Z" level=info msg="Created container e4c8a4234fb3187bd7599e962d6a85a954434a95064cb491b1e31a4ed482fd06: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-8d8lv/dashboard-metrics-scraper" id=82c8d848-cc21-406e-8072-6e110d5dcbc4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:17:09 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:09.307466682Z" level=info msg="Starting container: e4c8a4234fb3187bd7599e962d6a85a954434a95064cb491b1e31a4ed482fd06" id=8e87c6c5-359b-4452-b089-d3130040b0da name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 09:17:09 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:09.310405195Z" level=info msg="Started container" PID=1784 containerID=e4c8a4234fb3187bd7599e962d6a85a954434a95064cb491b1e31a4ed482fd06 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-8d8lv/dashboard-metrics-scraper id=8e87c6c5-359b-4452-b089-d3130040b0da name=/runtime.v1.RuntimeService/StartContainer sandboxID=e57bda849df2e63f8f8866c8254f09e7b86f9be1891897aef0dc5e274576168e
	Nov 29 09:17:09 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:09.387086537Z" level=info msg="Removing container: 8ce55b594fee130d22c7869b97adc182b5a3fb462788336518e3e0129a8b1a65" id=c00c5af5-bf6d-47bc-9302-473b7a81c2cd name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 29 09:17:09 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:09.403496931Z" level=info msg="Removed container 8ce55b594fee130d22c7869b97adc182b5a3fb462788336518e3e0129a8b1a65: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-8d8lv/dashboard-metrics-scraper" id=c00c5af5-bf6d-47bc-9302-473b7a81c2cd name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	e4c8a4234fb31       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           17 seconds ago      Exited              dashboard-metrics-scraper   2                   e57bda849df2e       dashboard-metrics-scraper-5f989dc9cf-8d8lv       kubernetes-dashboard
	c5f2e76d762bb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   c9220bd7ef8fc       storage-provisioner                              kube-system
	61ce0b8ff133d       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   36 seconds ago      Running             kubernetes-dashboard        0                   56bf2e9563732       kubernetes-dashboard-8694d4445c-mn66t            kubernetes-dashboard
	f7b192ed98d03       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   0d1bb0b0c97de       kindnet-xjmpm                                    kube-system
	e9a43784b2c72       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           54 seconds ago      Running             kube-proxy                  0                   b64f03c38e837       kube-proxy-plgmf                                 kube-system
	4eeb9cde84ff0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   c9220bd7ef8fc       storage-provisioner                              kube-system
	6fc7cfe8003b7       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   f13282378c76d       busybox                                          default
	56c6159514de4       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           54 seconds ago      Running             coredns                     0                   f0e707b6e0a25       coredns-5dd5756b68-lwg8c                         kube-system
	f619cafca5a17       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           58 seconds ago      Running             etcd                        0                   f50cb89565963       etcd-old-k8s-version-680646                      kube-system
	fc21916bee97c       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           58 seconds ago      Running             kube-controller-manager     0                   13bee11f22643       kube-controller-manager-old-k8s-version-680646   kube-system
	30573a3cd5db7       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           58 seconds ago      Running             kube-apiserver              0                   0fa354d9ee3c0       kube-apiserver-old-k8s-version-680646            kube-system
	719922d67f462       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           58 seconds ago      Running             kube-scheduler              0                   5b20d26610a4d       kube-scheduler-old-k8s-version-680646            kube-system
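
The table above is CRI-O's own view of the containers, in the usual crictl ps layout. The same data can be read programmatically; a sketch that shells out to crictl on the node and decodes just the columns shown, assuming crictl is on PATH and that the field names of its -o json output match the CRI JSON form (id, state, metadata.name):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // criContainers is a minimal decode of `crictl ps -a -o json`, enough to
    // reproduce the CONTAINER/STATE/NAME columns above.
    type criContainers struct {
    	Containers []struct {
    		ID       string `json:"id"`
    		State    string `json:"state"` // e.g. CONTAINER_RUNNING, CONTAINER_EXITED
    		Metadata struct {
    			Name string `json:"name"`
    		} `json:"metadata"`
    	} `json:"containers"`
    }

    func main() {
    	// crictl needs root to reach the CRI-O socket on the node.
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "-o", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var cs criContainers
    	if err := json.Unmarshal(out, &cs); err != nil {
    		panic(err)
    	}
    	for _, c := range cs.Containers {
    		fmt.Printf("%.13s  %-18s  %s\n", c.ID, c.State, c.Metadata.Name)
    	}
    }
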
	
	
	==> coredns [56c6159514de487ff8175db94d66f55079bfff299bcf0181130cc9274ba6fbd4] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59208 - 62803 "HINFO IN 3617868267834132906.4431784462698880449. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.030677404s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-680646
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-680646
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=old-k8s-version-680646
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T09_15_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 09:15:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-680646
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 09:17:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 09:17:01 +0000   Sat, 29 Nov 2025 09:15:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 09:17:01 +0000   Sat, 29 Nov 2025 09:15:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 09:17:01 +0000   Sat, 29 Nov 2025 09:15:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 09:17:01 +0000   Sat, 29 Nov 2025 09:15:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-680646
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                3f6721fd-aca4-48a4-bf5d-00d6fd2bc52a
	  Boot ID:                    5b66e152-4b71-4129-a911-cefbf4861c86
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-5dd5756b68-lwg8c                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-old-k8s-version-680646                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m3s
	  kube-system                 kindnet-xjmpm                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-old-k8s-version-680646             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-old-k8s-version-680646    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-plgmf                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-old-k8s-version-680646             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-8d8lv        0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-mn66t             0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 108s                 kube-proxy       
	  Normal  Starting                 54s                  kube-proxy       
	  Normal  Starting                 2m8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m8s (x9 over 2m8s)  kubelet          Node old-k8s-version-680646 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m8s (x8 over 2m8s)  kubelet          Node old-k8s-version-680646 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m8s (x7 over 2m8s)  kubelet          Node old-k8s-version-680646 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m2s                 kubelet          Node old-k8s-version-680646 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m2s                 kubelet          Node old-k8s-version-680646 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m2s                 kubelet          Node old-k8s-version-680646 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m2s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s                 node-controller  Node old-k8s-version-680646 event: Registered Node old-k8s-version-680646 in Controller
	  Normal  NodeReady                97s                  kubelet          Node old-k8s-version-680646 status is now: NodeReady
	  Normal  Starting                 59s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x9 over 59s)    kubelet          Node old-k8s-version-680646 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)    kubelet          Node old-k8s-version-680646 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x7 over 59s)    kubelet          Node old-k8s-version-680646 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           43s                  node-controller  Node old-k8s-version-680646 event: Registered Node old-k8s-version-680646 in Controller
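
The Conditions table in this node description is what the node_conditions check earlier in the log reads when "verifying NodePressure condition": MemoryPressure and DiskPressure must both be False for the node to pass. A client-go sketch of that reading, assuming a kubeconfig at the default location (the helper name is illustrative):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // checkNodePressure fails if MemoryPressure or DiskPressure is anything
    // but False, i.e. the same reading of the Conditions table shown above.
    func checkNodePressure(ctx context.Context, cs *kubernetes.Clientset, name string) error {
    	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	for _, c := range node.Status.Conditions {
    		if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) &&
    			c.Status != corev1.ConditionFalse {
    			return fmt.Errorf("node %s: %s is %s", name, c.Type, c.Status)
    		}
    	}
    	return nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	fmt.Println(checkNodePressure(context.Background(), cs, "old-k8s-version-680646"))
    }
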
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 e1 87 e4 84 ae 08 06
	[  +0.002640] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8a c7 6c 3e 12 d1 08 06
	[ +14.778105] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 56 8a 29 c7 2a 6f 08 06
	[ +13.520989] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 86 32 aa eb 52 77 08 06
	[  +0.000371] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 56 8a 29 c7 2a 6f 08 06
	[ +14.762906] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 df ae 93 da f6 08 06
	[  +0.000384] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a c7 6c 3e 12 d1 08 06
	[Nov29 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 2e a3 ac f9 5c c0 08 06
	[ +13.297626] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 33 ff 54 aa a1 08 06
	[  +7.390906] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0a 68 f3 3d 3c e9 08 06
	[  +0.000436] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 2e a3 ac f9 5c c0 08 06
	[  +7.145922] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ea c3 2f 2a a3 01 08 06
	[  +0.000415] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e2 33 ff 54 aa a1 08 06
	
	
	==> etcd [f619cafca5a17742a3c6fba5014451687d7d35e25977a157e5be1c8489be5079] <==
	{"level":"info","ts":"2025-11-29T09:16:27.814999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-11-29T09:16:27.815069Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-29T09:16:27.815237Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-29T09:16:27.81532Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-29T09:16:27.818566Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-29T09:16:27.818635Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-29T09:16:27.818988Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-29T09:16:27.818939Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-29T09:16:27.819004Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-29T09:16:29.607738Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-29T09:16:29.607786Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-29T09:16:29.6078Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-29T09:16:29.607812Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-11-29T09:16:29.607817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-29T09:16:29.607825Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-11-29T09:16:29.607833Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-29T09:16:29.610145Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-680646 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-29T09:16:29.610238Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-29T09:16:29.610236Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-29T09:16:29.610547Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-29T09:16:29.610859Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-29T09:16:29.611924Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-29T09:16:29.611923Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"warn","ts":"2025-11-29T09:17:01.593066Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.766777ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356969507594503 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/old-k8s-version-680646\" mod_revision:639 > success:<request_put:<key:\"/registry/leases/kube-node-lease/old-k8s-version-680646\" value_size:514 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/old-k8s-version-680646\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-29T09:17:01.593189Z","caller":"traceutil/trace.go:171","msg":"trace[983389728] transaction","detail":"{read_only:false; response_revision:645; number_of_response:1; }","duration":"210.094117ms","start":"2025-11-29T09:17:01.383077Z","end":"2025-11-29T09:17:01.593171Z","steps":["trace[983389728] 'process raft request'  (duration: 93.572287ms)","trace[983389728] 'compare'  (duration: 115.673692ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:17:26 up 59 min,  0 user,  load average: 3.75, 3.86, 2.50
	Linux old-k8s-version-680646 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f7b192ed98d03e41ad05d96225f71cd6ca9e5e80615108419e5489cfe0ae91e8] <==
	I1129 09:16:32.101265       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 09:16:32.101601       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1129 09:16:32.101796       1 main.go:148] setting mtu 1500 for CNI 
	I1129 09:16:32.101821       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 09:16:32.114977       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T09:16:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 09:16:32.319622       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 09:16:32.319657       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 09:16:32.319670       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 09:16:32.319865       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1129 09:16:32.701047       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 09:16:32.701108       1 metrics.go:72] Registering metrics
	I1129 09:16:32.701189       1 controller.go:711] "Syncing nftables rules"
	I1129 09:16:42.319655       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 09:16:42.319713       1 main.go:301] handling current node
	I1129 09:16:52.319930       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 09:16:52.319985       1 main.go:301] handling current node
	I1129 09:17:02.320028       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 09:17:02.320096       1 main.go:301] handling current node
	I1129 09:17:12.320969       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 09:17:12.321015       1 main.go:301] handling current node
	I1129 09:17:22.325947       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 09:17:22.325988       1 main.go:301] handling current node
	
	
	==> kube-apiserver [30573a3cd5db71fef67e2dd17636eef9fcc8eb82fe36a7ff2ed1d3a6ca9f1919] <==
	I1129 09:16:30.592102       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1129 09:16:30.634290       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 09:16:30.662032       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1129 09:16:30.664395       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1129 09:16:30.664448       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1129 09:16:30.664448       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1129 09:16:30.664659       1 aggregator.go:166] initial CRD sync complete...
	I1129 09:16:30.664671       1 autoregister_controller.go:141] Starting autoregister controller
	I1129 09:16:30.664692       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1129 09:16:30.664701       1 cache.go:39] Caches are synced for autoregister controller
	I1129 09:16:30.664731       1 shared_informer.go:318] Caches are synced for configmaps
	I1129 09:16:30.672392       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1129 09:16:30.676763       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1129 09:16:30.676788       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1129 09:16:31.516984       1 controller.go:624] quota admission added evaluator for: namespaces
	I1129 09:16:31.553215       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1129 09:16:31.572488       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 09:16:31.581038       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 09:16:31.591614       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 09:16:31.604458       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1129 09:16:31.645570       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.159.34"}
	I1129 09:16:31.657582       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.144.223"}
	I1129 09:16:43.760836       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 09:16:43.961333       1 controller.go:624] quota admission added evaluator for: endpoints
	I1129 09:16:44.011788       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [fc21916bee97cf411bc0e0fecd6723e2e6882a5a2e9c27cf65544bc90cf2c965] <==
	I1129 09:16:44.071191       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="313.585485ms"
	I1129 09:16:44.071334       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="90.332µs"
	I1129 09:16:44.072965       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-mn66t"
	I1129 09:16:44.073084       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-8d8lv"
	I1129 09:16:44.081249       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="65.113103ms"
	I1129 09:16:44.082474       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="66.387366ms"
	I1129 09:16:44.088986       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="7.659818ms"
	I1129 09:16:44.089144       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="101.753µs"
	I1129 09:16:44.089223       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="37.217µs"
	I1129 09:16:44.091206       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="8.678658ms"
	I1129 09:16:44.091332       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="72.458µs"
	I1129 09:16:44.095514       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="437.42µs"
	I1129 09:16:44.098597       1 shared_informer.go:318] Caches are synced for garbage collector
	I1129 09:16:44.098635       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1129 09:16:44.098643       1 shared_informer.go:318] Caches are synced for garbage collector
	I1129 09:16:44.108771       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="75.458µs"
	I1129 09:16:47.336099       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="86.719µs"
	I1129 09:16:48.340378       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="242.458µs"
	I1129 09:16:49.358756       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="116.663µs"
	I1129 09:16:50.359619       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="9.299923ms"
	I1129 09:16:50.359902       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="78.318µs"
	I1129 09:17:09.405310       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="83.23µs"
	I1129 09:17:10.207861       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.366963ms"
	I1129 09:17:10.209296       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="67.331µs"
	I1129 09:17:14.395715       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="70.128µs"
	
	
	==> kube-proxy [e9a43784b2c72acefa4683955a09e9ac167529a849f27e92985305377c18378c] <==
	I1129 09:16:31.937701       1 server_others.go:69] "Using iptables proxy"
	I1129 09:16:31.949569       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1129 09:16:31.973084       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 09:16:31.976314       1 server_others.go:152] "Using iptables Proxier"
	I1129 09:16:31.976354       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1129 09:16:31.976361       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1129 09:16:31.976389       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1129 09:16:31.976678       1 server.go:846] "Version info" version="v1.28.0"
	I1129 09:16:31.976695       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:16:31.977363       1 config.go:188] "Starting service config controller"
	I1129 09:16:31.977391       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1129 09:16:31.977420       1 config.go:97] "Starting endpoint slice config controller"
	I1129 09:16:31.977424       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1129 09:16:31.977464       1 config.go:315] "Starting node config controller"
	I1129 09:16:31.977502       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1129 09:16:32.078322       1 shared_informer.go:318] Caches are synced for node config
	I1129 09:16:32.078332       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1129 09:16:32.078357       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [719922d67f4629eeb37ff02ef625a4a45934ecac6b66eb3b61978808b6a57fde] <==
	I1129 09:16:28.207098       1 serving.go:348] Generated self-signed cert in-memory
	W1129 09:16:30.598540       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1129 09:16:30.598581       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1129 09:16:30.598593       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1129 09:16:30.598601       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1129 09:16:30.635082       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1129 09:16:30.635116       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:16:30.638900       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:16:30.639171       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1129 09:16:30.640459       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1129 09:16:30.640562       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1129 09:16:30.739700       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 29 09:16:44 old-k8s-version-680646 kubelet[732]: I1129 09:16:44.082889     732 topology_manager.go:215] "Topology Admit Handler" podUID="2da13538-dddd-4c5d-81fd-6f823bb78493" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-8d8lv"
	Nov 29 09:16:44 old-k8s-version-680646 kubelet[732]: I1129 09:16:44.207295     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2da13538-dddd-4c5d-81fd-6f823bb78493-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-8d8lv\" (UID: \"2da13538-dddd-4c5d-81fd-6f823bb78493\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-8d8lv"
	Nov 29 09:16:44 old-k8s-version-680646 kubelet[732]: I1129 09:16:44.207349     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsmmx\" (UniqueName: \"kubernetes.io/projected/f5d4707e-ce09-4732-98b6-607cdc8bd1ff-kube-api-access-tsmmx\") pod \"kubernetes-dashboard-8694d4445c-mn66t\" (UID: \"f5d4707e-ce09-4732-98b6-607cdc8bd1ff\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mn66t"
	Nov 29 09:16:44 old-k8s-version-680646 kubelet[732]: I1129 09:16:44.207383     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84q69\" (UniqueName: \"kubernetes.io/projected/2da13538-dddd-4c5d-81fd-6f823bb78493-kube-api-access-84q69\") pod \"dashboard-metrics-scraper-5f989dc9cf-8d8lv\" (UID: \"2da13538-dddd-4c5d-81fd-6f823bb78493\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-8d8lv"
	Nov 29 09:16:44 old-k8s-version-680646 kubelet[732]: I1129 09:16:44.207414     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f5d4707e-ce09-4732-98b6-607cdc8bd1ff-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-mn66t\" (UID: \"f5d4707e-ce09-4732-98b6-607cdc8bd1ff\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mn66t"
	Nov 29 09:16:47 old-k8s-version-680646 kubelet[732]: I1129 09:16:47.319004     732 scope.go:117] "RemoveContainer" containerID="2aa287ae1de387f642913e851b14054905992058adde6eba7a11d78aea48d63a"
	Nov 29 09:16:48 old-k8s-version-680646 kubelet[732]: I1129 09:16:48.324176     732 scope.go:117] "RemoveContainer" containerID="2aa287ae1de387f642913e851b14054905992058adde6eba7a11d78aea48d63a"
	Nov 29 09:16:48 old-k8s-version-680646 kubelet[732]: I1129 09:16:48.324325     732 scope.go:117] "RemoveContainer" containerID="8ce55b594fee130d22c7869b97adc182b5a3fb462788336518e3e0129a8b1a65"
	Nov 29 09:16:48 old-k8s-version-680646 kubelet[732]: E1129 09:16:48.324743     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-8d8lv_kubernetes-dashboard(2da13538-dddd-4c5d-81fd-6f823bb78493)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-8d8lv" podUID="2da13538-dddd-4c5d-81fd-6f823bb78493"
	Nov 29 09:16:49 old-k8s-version-680646 kubelet[732]: I1129 09:16:49.329554     732 scope.go:117] "RemoveContainer" containerID="8ce55b594fee130d22c7869b97adc182b5a3fb462788336518e3e0129a8b1a65"
	Nov 29 09:16:49 old-k8s-version-680646 kubelet[732]: E1129 09:16:49.329979     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-8d8lv_kubernetes-dashboard(2da13538-dddd-4c5d-81fd-6f823bb78493)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-8d8lv" podUID="2da13538-dddd-4c5d-81fd-6f823bb78493"
	Nov 29 09:16:50 old-k8s-version-680646 kubelet[732]: I1129 09:16:50.350522     732 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mn66t" podStartSLOduration=0.806896352 podCreationTimestamp="2025-11-29 09:16:44 +0000 UTC" firstStartedPulling="2025-11-29 09:16:44.415668875 +0000 UTC m=+17.265995583" lastFinishedPulling="2025-11-29 09:16:49.959231741 +0000 UTC m=+22.809558454" observedRunningTime="2025-11-29 09:16:50.350172593 +0000 UTC m=+23.200499312" watchObservedRunningTime="2025-11-29 09:16:50.350459223 +0000 UTC m=+23.200785942"
	Nov 29 09:16:54 old-k8s-version-680646 kubelet[732]: I1129 09:16:54.385048     732 scope.go:117] "RemoveContainer" containerID="8ce55b594fee130d22c7869b97adc182b5a3fb462788336518e3e0129a8b1a65"
	Nov 29 09:16:54 old-k8s-version-680646 kubelet[732]: E1129 09:16:54.385318     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-8d8lv_kubernetes-dashboard(2da13538-dddd-4c5d-81fd-6f823bb78493)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-8d8lv" podUID="2da13538-dddd-4c5d-81fd-6f823bb78493"
	Nov 29 09:17:02 old-k8s-version-680646 kubelet[732]: I1129 09:17:02.365623     732 scope.go:117] "RemoveContainer" containerID="4eeb9cde84ff02b79f96004411581e8503a4fc89f1155b4646dd015b41a654c5"
	Nov 29 09:17:09 old-k8s-version-680646 kubelet[732]: I1129 09:17:09.247447     732 scope.go:117] "RemoveContainer" containerID="8ce55b594fee130d22c7869b97adc182b5a3fb462788336518e3e0129a8b1a65"
	Nov 29 09:17:09 old-k8s-version-680646 kubelet[732]: I1129 09:17:09.385385     732 scope.go:117] "RemoveContainer" containerID="8ce55b594fee130d22c7869b97adc182b5a3fb462788336518e3e0129a8b1a65"
	Nov 29 09:17:09 old-k8s-version-680646 kubelet[732]: I1129 09:17:09.386186     732 scope.go:117] "RemoveContainer" containerID="e4c8a4234fb3187bd7599e962d6a85a954434a95064cb491b1e31a4ed482fd06"
	Nov 29 09:17:09 old-k8s-version-680646 kubelet[732]: E1129 09:17:09.387714     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-8d8lv_kubernetes-dashboard(2da13538-dddd-4c5d-81fd-6f823bb78493)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-8d8lv" podUID="2da13538-dddd-4c5d-81fd-6f823bb78493"
	Nov 29 09:17:14 old-k8s-version-680646 kubelet[732]: I1129 09:17:14.384968     732 scope.go:117] "RemoveContainer" containerID="e4c8a4234fb3187bd7599e962d6a85a954434a95064cb491b1e31a4ed482fd06"
	Nov 29 09:17:14 old-k8s-version-680646 kubelet[732]: E1129 09:17:14.385357     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-8d8lv_kubernetes-dashboard(2da13538-dddd-4c5d-81fd-6f823bb78493)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-8d8lv" podUID="2da13538-dddd-4c5d-81fd-6f823bb78493"
	Nov 29 09:17:24 old-k8s-version-680646 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 29 09:17:24 old-k8s-version-680646 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 29 09:17:24 old-k8s-version-680646 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 29 09:17:24 old-k8s-version-680646 systemd[1]: kubelet.service: Consumed 1.664s CPU time.
	
	
	==> kubernetes-dashboard [61ce0b8ff133dd7770871455d52b8eb5571079a0a2609fadc954e3bec70465cd] <==
	2025/11/29 09:16:50 Starting overwatch
	2025/11/29 09:16:50 Using namespace: kubernetes-dashboard
	2025/11/29 09:16:50 Using in-cluster config to connect to apiserver
	2025/11/29 09:16:50 Using secret token for csrf signing
	2025/11/29 09:16:50 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/29 09:16:50 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/29 09:16:50 Successful initial request to the apiserver, version: v1.28.0
	2025/11/29 09:16:50 Generating JWE encryption key
	2025/11/29 09:16:50 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/29 09:16:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/29 09:16:50 Initializing JWE encryption key from synchronized object
	2025/11/29 09:16:50 Creating in-cluster Sidecar client
	2025/11/29 09:16:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/29 09:16:50 Serving insecurely on HTTP port: 9090
	2025/11/29 09:17:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [4eeb9cde84ff02b79f96004411581e8503a4fc89f1155b4646dd015b41a654c5] <==
	I1129 09:16:31.890330       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1129 09:17:01.894279       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [c5f2e76d762bbed4aac24938d20ed8ef6bc68a75c8faeace43ca72adfebaa06f] <==
	I1129 09:17:02.425744       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1129 09:17:02.435425       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1129 09:17:02.435572       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1129 09:17:19.840571       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 09:17:19.840928       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"20fa7a7e-0f59-4be3-9c60-c5917e942d20", APIVersion:"v1", ResourceVersion:"667", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-680646_93874ddc-56de-474e-8750-1fb398bcee7e became leader
	I1129 09:17:19.840971       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-680646_93874ddc-56de-474e-8750-1fb398bcee7e!
	I1129 09:17:19.942063       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-680646_93874ddc-56de-474e-8750-1fb398bcee7e!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-680646 -n old-k8s-version-680646
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-680646 -n old-k8s-version-680646: exit status 2 (352.885387ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-680646 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-680646
helpers_test.go:243: (dbg) docker inspect old-k8s-version-680646:

-- stdout --
	[
	    {
	        "Id": "09f4f79f42ba3a1cca8b8af1853d931e4dbcad3c3c7527a57c7e84f8c2ac2ab8",
	        "Created": "2025-11-29T09:15:05.20238369Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 328733,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T09:16:20.785552494Z",
	            "FinishedAt": "2025-11-29T09:16:19.762789264Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/09f4f79f42ba3a1cca8b8af1853d931e4dbcad3c3c7527a57c7e84f8c2ac2ab8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/09f4f79f42ba3a1cca8b8af1853d931e4dbcad3c3c7527a57c7e84f8c2ac2ab8/hostname",
	        "HostsPath": "/var/lib/docker/containers/09f4f79f42ba3a1cca8b8af1853d931e4dbcad3c3c7527a57c7e84f8c2ac2ab8/hosts",
	        "LogPath": "/var/lib/docker/containers/09f4f79f42ba3a1cca8b8af1853d931e4dbcad3c3c7527a57c7e84f8c2ac2ab8/09f4f79f42ba3a1cca8b8af1853d931e4dbcad3c3c7527a57c7e84f8c2ac2ab8-json.log",
	        "Name": "/old-k8s-version-680646",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-680646:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-680646",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "09f4f79f42ba3a1cca8b8af1853d931e4dbcad3c3c7527a57c7e84f8c2ac2ab8",
	                "LowerDir": "/var/lib/docker/overlay2/968ca6ee81356bbcecebb99911f7a3b0a6f59a701eda8a25aa396e0371a519e5-init/diff:/var/lib/docker/overlay2/5b012372cfb54f6c71f4d7f0bca0124866eeda530eaa04bd84a67c7b4c8b35a8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/968ca6ee81356bbcecebb99911f7a3b0a6f59a701eda8a25aa396e0371a519e5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/968ca6ee81356bbcecebb99911f7a3b0a6f59a701eda8a25aa396e0371a519e5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/968ca6ee81356bbcecebb99911f7a3b0a6f59a701eda8a25aa396e0371a519e5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-680646",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-680646/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-680646",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-680646",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-680646",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "9aa2a0819bb4b637403ff1d301dc250efed36cb3be8c34b124bb6c968ddcdd86",
	            "SandboxKey": "/var/run/docker/netns/9aa2a0819bb4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-680646": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a43c754cd40971db489179630ca1055c6922bb09bc13c0b7b4d8e4460b07cb9b",
	                    "EndpointID": "ec6ae846d56b46e0be2dd84d7fd6dd173a155a1238d66690f2ab03e7fdfb44a1",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "ba:2e:94:e8:ca:88",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-680646",
	                        "09f4f79f42ba"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-680646 -n old-k8s-version-680646
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-680646 -n old-k8s-version-680646: exit status 2 (342.974937ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-680646 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-680646 logs -n 25: (1.152900176s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-628644 sudo containerd config dump                                                                                                                                                                                                  │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ ssh     │ -p bridge-628644 sudo crio config                                                                                                                                                                                                             │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ delete  │ -p bridge-628644                                                                                                                                                                                                                              │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ delete  │ -p disable-driver-mounts-327778                                                                                                                                                                                                               │ disable-driver-mounts-327778 │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ start   │ -p default-k8s-diff-port-632243 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:16 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-680646 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	│ stop    │ -p old-k8s-version-680646 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:16 UTC │
	│ addons  │ enable metrics-server -p no-preload-897274 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	│ stop    │ -p no-preload-897274 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:16 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-680646 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:16 UTC │
	│ start   │ -p old-k8s-version-680646 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:17 UTC │
	│ start   │ -p no-preload-897274 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-160987 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-632243 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	│ stop    │ -p embed-certs-160987 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:17 UTC │
	│ stop    │ -p default-k8s-diff-port-632243 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-160987 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ start   │ -p embed-certs-160987 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-632243 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ start   │ -p default-k8s-diff-port-632243 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │                     │
	│ image   │ old-k8s-version-680646 image list --format=json                                                                                                                                                                                               │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ pause   │ -p old-k8s-version-680646 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:17:02
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 09:17:02.516567  336858 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:17:02.516867  336858 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:17:02.516879  336858 out.go:374] Setting ErrFile to fd 2...
	I1129 09:17:02.516885  336858 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:17:02.517202  336858 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 09:17:02.517652  336858 out.go:368] Setting JSON to false
	I1129 09:17:02.519042  336858 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3574,"bootTime":1764404248,"procs":398,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 09:17:02.519120  336858 start.go:143] virtualization: kvm guest
	I1129 09:17:02.523941  336858 out.go:179] * [default-k8s-diff-port-632243] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 09:17:02.525532  336858 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:17:02.525531  336858 notify.go:221] Checking for updates...
	I1129 09:17:02.528359  336858 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:17:02.529548  336858 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 09:17:02.530740  336858 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5652/.minikube
	I1129 09:17:02.532045  336858 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 09:17:02.534230  336858 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:17:02.536057  336858 config.go:182] Loaded profile config "default-k8s-diff-port-632243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:17:02.536789  336858 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:17:02.563686  336858 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1129 09:17:02.563830  336858 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:17:02.624956  336858 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-29 09:17:02.613814827 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:17:02.625128  336858 docker.go:319] overlay module found
	I1129 09:17:02.627889  336858 out.go:179] * Using the docker driver based on existing profile
	I1129 09:17:02.629360  336858 start.go:309] selected driver: docker
	I1129 09:17:02.629383  336858 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-632243 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-632243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:17:02.629528  336858 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:17:02.630404  336858 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:17:02.700548  336858 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-29 09:17:02.68823324 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:17:02.700957  336858 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:17:02.701000  336858 cni.go:84] Creating CNI manager for ""
	I1129 09:17:02.701073  336858 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:17:02.701133  336858 start.go:353] cluster config:
	{Name:default-k8s-diff-port-632243 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-632243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:17:02.703361  336858 out.go:179] * Starting "default-k8s-diff-port-632243" primary control-plane node in "default-k8s-diff-port-632243" cluster
	I1129 09:17:02.705024  336858 cache.go:134] Beginning downloading kic base image for docker with crio
	I1129 09:17:02.706697  336858 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 09:17:02.708213  336858 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:17:02.708256  336858 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1129 09:17:02.708273  336858 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 09:17:02.708284  336858 cache.go:65] Caching tarball of preloaded images
	I1129 09:17:02.708534  336858 preload.go:238] Found /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1129 09:17:02.708554  336858 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1129 09:17:02.708687  336858 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/config.json ...
	I1129 09:17:02.732236  336858 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 09:17:02.732260  336858 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 09:17:02.732283  336858 cache.go:243] Successfully downloaded all kic artifacts
	I1129 09:17:02.732319  336858 start.go:360] acquireMachinesLock for default-k8s-diff-port-632243: {Name:mk4d57d40865f49c5625093aed79ed0eb9003360 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:17:02.732398  336858 start.go:364] duration metric: took 48.489µs to acquireMachinesLock for "default-k8s-diff-port-632243"
	I1129 09:17:02.732422  336858 start.go:96] Skipping create...Using existing machine configuration
	I1129 09:17:02.732429  336858 fix.go:54] fixHost starting: 
	I1129 09:17:02.732726  336858 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-632243 --format={{.State.Status}}
	I1129 09:17:02.753771  336858 fix.go:112] recreateIfNeeded on default-k8s-diff-port-632243: state=Stopped err=<nil>
	W1129 09:17:02.753806  336858 fix.go:138] unexpected machine state, will restart: <nil>
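
    Note: recreateIfNeeded found the container present but Stopped, so the fix path below restarts it rather than recreating it. The state probe is the inspect template shown a few lines up; run standalone (profile name taken from this log) it is simply:

        docker container inspect default-k8s-diff-port-632243 --format={{.State.Status}}
        # prints one of: created, running, paused, restarting, removing, exited, dead
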
	W1129 09:17:00.536112  331191 pod_ready.go:104] pod "coredns-66bc5c9577-85hh2" is not "Ready", error: <nil>
	W1129 09:17:02.536306  331191 pod_ready.go:104] pod "coredns-66bc5c9577-85hh2" is not "Ready", error: <nil>
	W1129 09:17:02.212593  328395 pod_ready.go:104] pod "coredns-5dd5756b68-lwg8c" is not "Ready", error: <nil>
	W1129 09:17:04.711471  328395 pod_ready.go:104] pod "coredns-5dd5756b68-lwg8c" is not "Ready", error: <nil>
	I1129 09:17:02.335347  336547 out.go:252] * Restarting existing docker container for "embed-certs-160987" ...
	I1129 09:17:02.335454  336547 cli_runner.go:164] Run: docker start embed-certs-160987
	I1129 09:17:02.636092  336547 cli_runner.go:164] Run: docker container inspect embed-certs-160987 --format={{.State.Status}}
	I1129 09:17:02.660922  336547 kic.go:430] container "embed-certs-160987" state is running.
	I1129 09:17:02.661621  336547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-160987
	I1129 09:17:02.685105  336547 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/config.json ...
	I1129 09:17:02.685323  336547 machine.go:94] provisionDockerMachine start ...
	I1129 09:17:02.685370  336547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:17:02.707931  336547 main.go:143] libmachine: Using SSH client type: native
	I1129 09:17:02.708250  336547 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1129 09:17:02.708267  336547 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:17:02.708945  336547 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44654->127.0.0.1:33119: read: connection reset by peer
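
    Note: the "connection reset by peer" here is benign: it is the first SSH dial racing the container's sshd immediately after "docker start", and libmachine simply retries until the dial succeeds (three seconds later, just below). A rough shell equivalent of that wait loop (port and key path from this log; the loop itself is an illustration, not minikube's code):

        until ssh -o ConnectTimeout=2 -o StrictHostKeyChecking=no \
              -i /home/jenkins/minikube-integration/22000-5652/.minikube/machines/embed-certs-160987/id_rsa \
              -p 33119 docker@127.0.0.1 true 2>/dev/null; do sleep 1; done
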
	I1129 09:17:05.856174  336547 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-160987
	
	I1129 09:17:05.856208  336547 ubuntu.go:182] provisioning hostname "embed-certs-160987"
	I1129 09:17:05.856320  336547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:17:05.875744  336547 main.go:143] libmachine: Using SSH client type: native
	I1129 09:17:05.876079  336547 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1129 09:17:05.876103  336547 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-160987 && echo "embed-certs-160987" | sudo tee /etc/hostname
	I1129 09:17:06.032777  336547 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-160987
	
	I1129 09:17:06.032893  336547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:17:06.052878  336547 main.go:143] libmachine: Using SSH client type: native
	I1129 09:17:06.053113  336547 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1129 09:17:06.053137  336547 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-160987' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-160987/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-160987' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:17:06.198498  336547 main.go:143] libmachine: SSH cmd err, output: <nil>: 
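
    Note: the hosts script above is idempotent: grep -xq matches whole lines only, so an existing 127.0.1.1 entry is rewritten in place via sed, and the tee branch only appends when none exists. The empty command output here shows the tee branch (which would have echoed "127.0.1.1 embed-certs-160987") was not taken; re-running the block is a no-op. A quick spot-check inside the guest:

        grep 'embed-certs-160987' /etc/hosts
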
	I1129 09:17:06.198524  336547 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-5652/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-5652/.minikube}
	I1129 09:17:06.198564  336547 ubuntu.go:190] setting up certificates
	I1129 09:17:06.198577  336547 provision.go:84] configureAuth start
	I1129 09:17:06.198648  336547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-160987
	I1129 09:17:06.219626  336547 provision.go:143] copyHostCerts
	I1129 09:17:06.219696  336547 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem, removing ...
	I1129 09:17:06.219708  336547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem
	I1129 09:17:06.219789  336547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem (1078 bytes)
	I1129 09:17:06.219929  336547 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem, removing ...
	I1129 09:17:06.219944  336547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem
	I1129 09:17:06.219987  336547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem (1123 bytes)
	I1129 09:17:06.220054  336547 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem, removing ...
	I1129 09:17:06.220068  336547 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem
	I1129 09:17:06.220092  336547 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem (1675 bytes)
	I1129 09:17:06.220148  336547 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem org=jenkins.embed-certs-160987 san=[127.0.0.1 192.168.85.2 embed-certs-160987 localhost minikube]
	I1129 09:17:06.270790  336547 provision.go:177] copyRemoteCerts
	I1129 09:17:06.270869  336547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:17:06.270930  336547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:17:06.292671  336547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/embed-certs-160987/id_rsa Username:docker}
	I1129 09:17:06.398202  336547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1129 09:17:06.417390  336547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1129 09:17:06.436495  336547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1129 09:17:06.455822  336547 provision.go:87] duration metric: took 257.228509ms to configureAuth
	I1129 09:17:06.455865  336547 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:17:06.456076  336547 config.go:182] Loaded profile config "embed-certs-160987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:17:06.456197  336547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:17:06.476477  336547 main.go:143] libmachine: Using SSH client type: native
	I1129 09:17:06.476726  336547 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1129 09:17:06.476750  336547 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 09:17:06.819205  336547 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 09:17:06.819238  336547 machine.go:97] duration metric: took 4.133904808s to provisionDockerMachine
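
    Note: the SSH block above writes a one-line environment file picked up by the crio service unit (hence the restart); the stray CRIO_MINIKUBE_OPTIONS line in the output is just tee echoing what it wrote. The net effect on the guest:

        $ cat /etc/sysconfig/crio.minikube
        CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

    i.e. CRI-O treats the whole service CIDR as an insecure (plain-HTTP) registry, so images can be pulled from in-cluster registries reachable via ClusterIPs without TLS.
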
	I1129 09:17:06.819263  336547 start.go:293] postStartSetup for "embed-certs-160987" (driver="docker")
	I1129 09:17:06.819278  336547 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:17:06.819352  336547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:17:06.819407  336547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:17:06.840865  336547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/embed-certs-160987/id_rsa Username:docker}
	I1129 09:17:06.944808  336547 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:17:06.949300  336547 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:17:06.949336  336547 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:17:06.949349  336547 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5652/.minikube/addons for local assets ...
	I1129 09:17:06.949406  336547 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5652/.minikube/files for local assets ...
	I1129 09:17:06.949554  336547 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem -> 92162.pem in /etc/ssl/certs
	I1129 09:17:06.949668  336547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:17:06.958186  336547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem --> /etc/ssl/certs/92162.pem (1708 bytes)
	I1129 09:17:06.977944  336547 start.go:296] duration metric: took 158.65369ms for postStartSetup
	I1129 09:17:06.978035  336547 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:17:06.978090  336547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:17:06.998390  336547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/embed-certs-160987/id_rsa Username:docker}
	I1129 09:17:02.756388  336858 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-632243" ...
	I1129 09:17:02.756503  336858 cli_runner.go:164] Run: docker start default-k8s-diff-port-632243
	I1129 09:17:03.067953  336858 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-632243 --format={{.State.Status}}
	I1129 09:17:03.088190  336858 kic.go:430] container "default-k8s-diff-port-632243" state is running.
	I1129 09:17:03.088676  336858 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-632243
	I1129 09:17:03.108471  336858 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/config.json ...
	I1129 09:17:03.108793  336858 machine.go:94] provisionDockerMachine start ...
	I1129 09:17:03.108902  336858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-632243
	I1129 09:17:03.129437  336858 main.go:143] libmachine: Using SSH client type: native
	I1129 09:17:03.129698  336858 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1129 09:17:03.129713  336858 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:17:03.130314  336858 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34754->127.0.0.1:33124: read: connection reset by peer
	I1129 09:17:06.279831  336858 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-632243
	
	I1129 09:17:06.279883  336858 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-632243"
	I1129 09:17:06.279971  336858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-632243
	I1129 09:17:06.301443  336858 main.go:143] libmachine: Using SSH client type: native
	I1129 09:17:06.301714  336858 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1129 09:17:06.301730  336858 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-632243 && echo "default-k8s-diff-port-632243" | sudo tee /etc/hostname
	I1129 09:17:06.459726  336858 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-632243
	
	I1129 09:17:06.459823  336858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-632243
	I1129 09:17:06.481195  336858 main.go:143] libmachine: Using SSH client type: native
	I1129 09:17:06.481408  336858 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1129 09:17:06.481426  336858 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-632243' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-632243/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-632243' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:17:06.628721  336858 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 09:17:06.628753  336858 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-5652/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-5652/.minikube}
	I1129 09:17:06.628816  336858 ubuntu.go:190] setting up certificates
	I1129 09:17:06.628836  336858 provision.go:84] configureAuth start
	I1129 09:17:06.628913  336858 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-632243
	I1129 09:17:06.648660  336858 provision.go:143] copyHostCerts
	I1129 09:17:06.648735  336858 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem, removing ...
	I1129 09:17:06.648748  336858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem
	I1129 09:17:06.648801  336858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem (1078 bytes)
	I1129 09:17:06.648948  336858 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem, removing ...
	I1129 09:17:06.648961  336858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem
	I1129 09:17:06.648987  336858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem (1123 bytes)
	I1129 09:17:06.649079  336858 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem, removing ...
	I1129 09:17:06.649088  336858 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem
	I1129 09:17:06.649108  336858 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem (1675 bytes)
	I1129 09:17:06.649158  336858 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-632243 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-632243 localhost minikube]
	I1129 09:17:06.671719  336858 provision.go:177] copyRemoteCerts
	I1129 09:17:06.671792  336858 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:17:06.671835  336858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-632243
	I1129 09:17:06.691597  336858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/default-k8s-diff-port-632243/id_rsa Username:docker}
	I1129 09:17:06.798451  336858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1129 09:17:06.823352  336858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1129 09:17:06.844669  336858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1129 09:17:06.864460  336858 provision.go:87] duration metric: took 235.597243ms to configureAuth
	I1129 09:17:06.864493  336858 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:17:06.864679  336858 config.go:182] Loaded profile config "default-k8s-diff-port-632243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:17:06.864807  336858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-632243
	I1129 09:17:06.885331  336858 main.go:143] libmachine: Using SSH client type: native
	I1129 09:17:06.885563  336858 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1129 09:17:06.885598  336858 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 09:17:07.241084  336858 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 09:17:07.241113  336858 machine.go:97] duration metric: took 4.132299373s to provisionDockerMachine
	I1129 09:17:07.241127  336858 start.go:293] postStartSetup for "default-k8s-diff-port-632243" (driver="docker")
	I1129 09:17:07.241140  336858 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:17:07.241197  336858 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:17:07.241245  336858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-632243
	I1129 09:17:07.263881  336858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/default-k8s-diff-port-632243/id_rsa Username:docker}
	I1129 09:17:07.368872  336858 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:17:07.372875  336858 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:17:07.372910  336858 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:17:07.372925  336858 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5652/.minikube/addons for local assets ...
	I1129 09:17:07.372988  336858 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5652/.minikube/files for local assets ...
	I1129 09:17:07.373115  336858 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem -> 92162.pem in /etc/ssl/certs
	I1129 09:17:07.373246  336858 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:17:07.382330  336858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem --> /etc/ssl/certs/92162.pem (1708 bytes)
	I1129 09:17:07.402608  336858 start.go:296] duration metric: took 161.465373ms for postStartSetup
	I1129 09:17:07.402707  336858 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:17:07.402757  336858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-632243
	I1129 09:17:07.423633  336858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/default-k8s-diff-port-632243/id_rsa Username:docker}
	W1129 09:17:05.035558  331191 pod_ready.go:104] pod "coredns-66bc5c9577-85hh2" is not "Ready", error: <nil>
	W1129 09:17:07.035927  331191 pod_ready.go:104] pod "coredns-66bc5c9577-85hh2" is not "Ready", error: <nil>
	I1129 09:17:07.099415  336547 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 09:17:07.104603  336547 fix.go:56] duration metric: took 4.791698243s for fixHost
	I1129 09:17:07.104629  336547 start.go:83] releasing machines lock for "embed-certs-160987", held for 4.791746655s
	I1129 09:17:07.104692  336547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-160987
	I1129 09:17:07.125915  336547 ssh_runner.go:195] Run: cat /version.json
	I1129 09:17:07.125936  336547 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:17:07.125975  336547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:17:07.125998  336547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:17:07.147656  336547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/embed-certs-160987/id_rsa Username:docker}
	I1129 09:17:07.148010  336547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/embed-certs-160987/id_rsa Username:docker}
	I1129 09:17:07.310280  336547 ssh_runner.go:195] Run: systemctl --version
	I1129 09:17:07.317395  336547 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 09:17:07.355516  336547 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:17:07.361026  336547 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:17:07.361114  336547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:17:07.369613  336547 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
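
    Note: the find/mv pass above disables any pre-existing bridge or podman CNI configs by renaming them with a .mk_disabled suffix, leaving kindnet (chosen earlier for the docker driver + crio runtime combination) as the only active CNI; here nothing matched, so nothing was moved. Reversing it by hand is just the rename back (file name hypothetical):

        sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled \
                /etc/cni/net.d/87-podman-bridge.conflist
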
	I1129 09:17:07.369636  336547 start.go:496] detecting cgroup driver to use...
	I1129 09:17:07.369671  336547 detect.go:190] detected "systemd" cgroup driver on host os
	I1129 09:17:07.369715  336547 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 09:17:07.385798  336547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 09:17:07.399325  336547 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:17:07.399395  336547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:17:07.415949  336547 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:17:07.430893  336547 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:17:07.515136  336547 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:17:07.611427  336547 docker.go:234] disabling docker service ...
	I1129 09:17:07.611489  336547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:17:07.627166  336547 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:17:07.641111  336547 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:17:07.723089  336547 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:17:07.818204  336547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:17:07.831025  336547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:17:07.846330  336547 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 09:17:07.846419  336547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:07.856031  336547 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1129 09:17:07.856109  336547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:07.866299  336547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:07.875986  336547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:07.886246  336547 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:17:07.902869  336547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:07.913342  336547 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:07.922537  336547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:07.933484  336547 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:17:07.941931  336547 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:17:07.950899  336547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:17:08.049140  336547 ssh_runner.go:195] Run: sudo systemctl restart crio
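
    Note: taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the fragment below before crio is restarted (a sketch reconstructed from the commands, not a capture of the file; surrounding keys are untouched):

        pause_image = "registry.k8s.io/pause:3.10.1"
        cgroup_manager = "systemd"
        conmon_cgroup = "pod"
        default_sysctls = [
          "net.ipv4.ip_unprivileged_port_start=0",
        ]
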
	I1129 09:17:08.188232  336547 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 09:17:08.188306  336547 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 09:17:08.192869  336547 start.go:564] Will wait 60s for crictl version
	I1129 09:17:08.192944  336547 ssh_runner.go:195] Run: which crictl
	I1129 09:17:08.197600  336547 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:17:08.231678  336547 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
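
    Note: this version probe needs no socket autodetection because of the /etc/crictl.yaml written a few lines earlier; that file is the single line

        runtime-endpoint: unix:///var/run/crio/crio.sock

    and it is what lets the plain crictl invocations here and below (crictl version, crictl images) talk to CRI-O directly.
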
	I1129 09:17:08.231765  336547 ssh_runner.go:195] Run: crio --version
	I1129 09:17:08.261691  336547 ssh_runner.go:195] Run: crio --version
	I1129 09:17:08.294388  336547 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1129 09:17:07.524062  336858 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 09:17:07.528979  336858 fix.go:56] duration metric: took 4.796544465s for fixHost
	I1129 09:17:07.529007  336858 start.go:83] releasing machines lock for "default-k8s-diff-port-632243", held for 4.796594627s
	I1129 09:17:07.529084  336858 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-632243
	I1129 09:17:07.558318  336858 ssh_runner.go:195] Run: cat /version.json
	I1129 09:17:07.558368  336858 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:17:07.558379  336858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-632243
	I1129 09:17:07.558436  336858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-632243
	I1129 09:17:07.580444  336858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/default-k8s-diff-port-632243/id_rsa Username:docker}
	I1129 09:17:07.580553  336858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/default-k8s-diff-port-632243/id_rsa Username:docker}
	I1129 09:17:07.738087  336858 ssh_runner.go:195] Run: systemctl --version
	I1129 09:17:07.745208  336858 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 09:17:07.787017  336858 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:17:07.792279  336858 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:17:07.792352  336858 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:17:07.800809  336858 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1129 09:17:07.800837  336858 start.go:496] detecting cgroup driver to use...
	I1129 09:17:07.800879  336858 detect.go:190] detected "systemd" cgroup driver on host os
	I1129 09:17:07.800933  336858 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 09:17:07.816342  336858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 09:17:07.831044  336858 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:17:07.831097  336858 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:17:07.846186  336858 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:17:07.860320  336858 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:17:07.951414  336858 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:17:08.048776  336858 docker.go:234] disabling docker service ...
	I1129 09:17:08.048877  336858 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:17:08.065070  336858 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:17:08.080000  336858 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:17:08.174957  336858 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:17:08.265742  336858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:17:08.280261  336858 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:17:08.297274  336858 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 09:17:08.297336  336858 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:08.307809  336858 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1129 09:17:08.307898  336858 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:08.318419  336858 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:08.328442  336858 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:08.338982  336858 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:17:08.348380  336858 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:08.360069  336858 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:08.370279  336858 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:08.380806  336858 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:17:08.389764  336858 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:17:08.399350  336858 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:17:08.488928  336858 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1129 09:17:08.641882  336858 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 09:17:08.641962  336858 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 09:17:08.646164  336858 start.go:564] Will wait 60s for crictl version
	I1129 09:17:08.646231  336858 ssh_runner.go:195] Run: which crictl
	I1129 09:17:08.650908  336858 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:17:08.679559  336858 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1129 09:17:08.679646  336858 ssh_runner.go:195] Run: crio --version
	I1129 09:17:08.714765  336858 ssh_runner.go:195] Run: crio --version
	I1129 09:17:08.759649  336858 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1129 09:17:08.295663  336547 cli_runner.go:164] Run: docker network inspect embed-certs-160987 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:17:08.315261  336547 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1129 09:17:08.319652  336547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
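
    Note: this one-liner is the report's standard idempotent hosts-entry rewrite: filter out any previous host.minikube.internal line, append the fresh mapping, then sudo cp the temp file back (cp rather than mv, since /etc/hosts in a container is a bind mount that cannot be renamed over). The same pattern recurs below for control-plane.minikube.internal. As a generic helper with the same semantics (hypothetical, for illustration only):

        upsert_host() {  # usage: upsert_host 192.168.85.1 host.minikube.internal
          { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > /tmp/h.$$
          sudo cp /tmp/h.$$ /etc/hosts && rm /tmp/h.$$
        }
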
	I1129 09:17:08.331000  336547 kubeadm.go:884] updating cluster {Name:embed-certs-160987 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-160987 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:17:08.331176  336547 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:17:08.331242  336547 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:17:08.369832  336547 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:17:08.369897  336547 crio.go:433] Images already preloaded, skipping extraction
	I1129 09:17:08.369961  336547 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:17:08.400037  336547 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:17:08.400061  336547 cache_images.go:86] Images are preloaded, skipping loading
	I1129 09:17:08.400071  336547 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1129 09:17:08.400201  336547 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-160987 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-160987 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
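
    Note: in the kubelet drop-in above, the bare "ExecStart=" line is deliberate systemd syntax: an empty assignment clears the ExecStart inherited from the base kubelet.service so the next line fully replaces the command. On the guest the merged unit can be reviewed with:

        systemctl cat kubelet
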
	I1129 09:17:08.400283  336547 ssh_runner.go:195] Run: crio config
	I1129 09:17:08.453899  336547 cni.go:84] Creating CNI manager for ""
	I1129 09:17:08.453939  336547 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:17:08.453960  336547 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 09:17:08.453995  336547 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-160987 NodeName:embed-certs-160987 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:17:08.454184  336547 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-160987"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1129 09:17:08.454263  336547 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 09:17:08.462902  336547 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 09:17:08.462984  336547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:17:08.471522  336547 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1129 09:17:08.485472  336547 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:17:08.499649  336547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
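
    Note: the three scp's above stage the drop-in (368 bytes), the unit file (352 bytes), and the rendered kubeadm config as /var/tmp/minikube/kubeadm.yaml.new (2214 bytes, the YAML printed earlier). To sanity-check that file by hand, recent kubeadm releases ship a validator (subcommand availability depends on the kubeadm version, so treat this as a suggestion):

        sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
             --config /var/tmp/minikube/kubeadm.yaml.new
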
	I1129 09:17:08.515194  336547 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1129 09:17:08.519231  336547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:17:08.530697  336547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:17:08.626804  336547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:17:08.648449  336547 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987 for IP: 192.168.85.2
	I1129 09:17:08.648474  336547 certs.go:195] generating shared ca certs ...
	I1129 09:17:08.648496  336547 certs.go:227] acquiring lock for ca certs: {Name:mk9945b3aa42b3d50b9bfd795af3a1c63f4e35bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:08.648684  336547 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key
	I1129 09:17:08.648741  336547 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key
	I1129 09:17:08.648753  336547 certs.go:257] generating profile certs ...
	I1129 09:17:08.648878  336547 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/client.key
	I1129 09:17:08.648943  336547 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/apiserver.key.f7c4ad31
	I1129 09:17:08.648995  336547 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/proxy-client.key
	I1129 09:17:08.649151  336547 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216.pem (1338 bytes)
	W1129 09:17:08.649200  336547 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216_empty.pem, impossibly tiny 0 bytes
	I1129 09:17:08.649214  336547 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem (1679 bytes)
	I1129 09:17:08.649253  336547 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem (1078 bytes)
	I1129 09:17:08.649291  336547 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:17:08.649329  336547 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem (1675 bytes)
	I1129 09:17:08.649411  336547 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem (1708 bytes)
	I1129 09:17:08.650143  336547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:17:08.672263  336547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 09:17:08.694521  336547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:17:08.717211  336547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1129 09:17:08.745154  336547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1129 09:17:08.768657  336547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 09:17:08.790607  336547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:17:08.809578  336547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/embed-certs-160987/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1129 09:17:08.833173  336547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:17:08.853546  336547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216.pem --> /usr/share/ca-certificates/9216.pem (1338 bytes)
	I1129 09:17:08.875724  336547 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem --> /usr/share/ca-certificates/92162.pem (1708 bytes)
	I1129 09:17:08.897703  336547 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:17:08.912195  336547 ssh_runner.go:195] Run: openssl version
	I1129 09:17:08.920667  336547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:17:08.930203  336547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:17:08.934417  336547 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:17:08.934483  336547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:17:08.972201  336547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 09:17:08.980892  336547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9216.pem && ln -fs /usr/share/ca-certificates/9216.pem /etc/ssl/certs/9216.pem"
	I1129 09:17:08.990540  336547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9216.pem
	I1129 09:17:08.994704  336547 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:34 /usr/share/ca-certificates/9216.pem
	I1129 09:17:08.994760  336547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9216.pem
	I1129 09:17:09.038463  336547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9216.pem /etc/ssl/certs/51391683.0"
	I1129 09:17:09.047452  336547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/92162.pem && ln -fs /usr/share/ca-certificates/92162.pem /etc/ssl/certs/92162.pem"
	I1129 09:17:09.057168  336547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92162.pem
	I1129 09:17:09.061215  336547 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:34 /usr/share/ca-certificates/92162.pem
	I1129 09:17:09.061286  336547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92162.pem
	I1129 09:17:09.097414  336547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/92162.pem /etc/ssl/certs/3ec20f2e.0"
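
The `openssl x509 -hash -noout` calls above print the subject-name hash OpenSSL uses to look up CAs in /etc/ssl/certs: each trusted cert is reachable through a symlink named `<hash>.0` (here b5213941.0 for the minikube CA, 51391683.0 and 3ec20f2e.0 for the test certs). A sketch of the same install step in Go, shelling out to openssl exactly as the commands above do (the path is the one from this run):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCA links a CA cert into /etc/ssl/certs under its OpenSSL
// subject-hash name, producing the same layout as the commands above.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // replace a stale link if one exists
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
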
	I1129 09:17:09.106145  336547 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:17:09.110918  336547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1129 09:17:09.153234  336547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1129 09:17:09.210510  336547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1129 09:17:09.279214  336547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1129 09:17:09.343494  336547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1129 09:17:09.407295  336547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
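
The `-checkend 86400` runs above ask openssl to exit non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how the restart path decides whether control-plane certs need regeneration. The equivalent check in pure Go with crypto/x509, as an illustrative sketch (not the code behind the log, which shells out to openssl as shown):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM cert at path expires inside d,
// matching what "openssl x509 -checkend <seconds>" tests above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Until(cert.NotAfter) < d, nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		fmt.Println(p, "expires within 24h:", soon, err)
	}
}
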
	I1129 09:17:09.449079  336547 kubeadm.go:401] StartCluster: {Name:embed-certs-160987 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-160987 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:17:09.449177  336547 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 09:17:09.449260  336547 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:17:09.485089  336547 cri.go:89] found id: "b910bdb65bdedc5ad424106b6aea90fdb221e9c9e03ce5e62c16682d9c219dbf"
	I1129 09:17:09.485114  336547 cri.go:89] found id: "d40c5061382593cad885d4b3c86be7a3641ec567ffe3cb652cfd84dd0c2396bf"
	I1129 09:17:09.485120  336547 cri.go:89] found id: "6ee1a1cef6abf99fe2be4154d33fa7e55335140b3c9fc7c979eabca17e682341"
	I1129 09:17:09.485124  336547 cri.go:89] found id: "062c767d0f027b4b3689a35cad7c6003a28dac146ef6a6e9732382f36ec71ffa"
	I1129 09:17:09.485137  336547 cri.go:89] found id: ""
	I1129 09:17:09.485190  336547 ssh_runner.go:195] Run: sudo runc list -f json
	W1129 09:17:09.498914  336547 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:17:09Z" level=error msg="open /run/runc: no such file or directory"
	I1129 09:17:09.498987  336547 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:17:09.508114  336547 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1129 09:17:09.508133  336547 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1129 09:17:09.508192  336547 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1129 09:17:09.516864  336547 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1129 09:17:09.517689  336547 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-160987" does not appear in /home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 09:17:09.518218  336547 kubeconfig.go:62] /home/jenkins/minikube-integration/22000-5652/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-160987" cluster setting kubeconfig missing "embed-certs-160987" context setting]
	I1129 09:17:09.519050  336547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/kubeconfig: {Name:mk6280be5f60ace95b1c1acc672c087e2366542f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:09.520972  336547 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1129 09:17:09.530092  336547 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1129 09:17:09.530138  336547 kubeadm.go:602] duration metric: took 21.99531ms to restartPrimaryControlPlane
	I1129 09:17:09.530148  336547 kubeadm.go:403] duration metric: took 81.080412ms to StartCluster
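
The repair above adds the missing cluster and context entries to the shared kubeconfig under a file lock. A minimal sketch of the same kind of edit with client-go's clientcmd package (names and the server address are the ones from this run; the credential paths are hypothetical, and this is a sketch rather than minikube's kubeconfig.go):

package main

import (
	"log"

	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/minikube-integration/22000-5652/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		log.Fatal(err)
	}
	name := "embed-certs-160987"
	cfg.Clusters[name] = &api.Cluster{Server: "https://192.168.85.2:8443"} // plus CA material
	cfg.AuthInfos[name] = &api.AuthInfo{
		// Hypothetical client-cert paths; with EmbedCerts:true (as in the
		// StartCluster dump above) minikube embeds the PEM bytes via
		// ClientCertificateData/ClientKeyData instead of file paths.
		ClientCertificate: "/home/jenkins/.minikube/profiles/" + name + "/client.crt",
		ClientKey:         "/home/jenkins/.minikube/profiles/" + name + "/client.key",
	}
	cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
	cfg.CurrentContext = name
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		log.Fatal(err)
	}
}
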
	I1129 09:17:09.530167  336547 settings.go:142] acquiring lock: {Name:mkebe0b2667fa03f51e92459cd7b7f37fd4c23bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:09.530328  336547 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 09:17:09.532249  336547 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/kubeconfig: {Name:mk6280be5f60ace95b1c1acc672c087e2366542f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:09.532528  336547 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 09:17:09.532861  336547 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 09:17:09.532996  336547 config.go:182] Loaded profile config "embed-certs-160987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:17:09.533015  336547 addons.go:70] Setting dashboard=true in profile "embed-certs-160987"
	I1129 09:17:09.533039  336547 addons.go:239] Setting addon dashboard=true in "embed-certs-160987"
	W1129 09:17:09.533048  336547 addons.go:248] addon dashboard should already be in state true
	I1129 09:17:09.533060  336547 addons.go:70] Setting default-storageclass=true in profile "embed-certs-160987"
	I1129 09:17:09.533075  336547 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-160987"
	I1129 09:17:09.533137  336547 host.go:66] Checking if "embed-certs-160987" exists ...
	I1129 09:17:09.533386  336547 cli_runner.go:164] Run: docker container inspect embed-certs-160987 --format={{.State.Status}}
	I1129 09:17:09.533626  336547 cli_runner.go:164] Run: docker container inspect embed-certs-160987 --format={{.State.Status}}
	I1129 09:17:09.533796  336547 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-160987"
	I1129 09:17:09.533855  336547 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-160987"
	W1129 09:17:09.533865  336547 addons.go:248] addon storage-provisioner should already be in state true
	I1129 09:17:09.533889  336547 host.go:66] Checking if "embed-certs-160987" exists ...
	I1129 09:17:09.534401  336547 cli_runner.go:164] Run: docker container inspect embed-certs-160987 --format={{.State.Status}}
	I1129 09:17:09.534604  336547 out.go:179] * Verifying Kubernetes components...
	I1129 09:17:09.538689  336547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:17:09.563549  336547 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:17:09.563552  336547 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1129 09:17:09.565020  336547 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:17:09.565097  336547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 09:17:09.565173  336547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:17:09.565046  336547 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1129 09:17:08.761196  336858 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-632243 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:17:08.782404  336858 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1129 09:17:08.787056  336858 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
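
The one-liner above is the idempotent /etc/hosts update: strip any existing line ending in the managed hostname, append the fresh mapping, then write through a temp file copied into place with sudo (a plain `>` redirect would run as the unprivileged shell user and fail on the root-owned file). The same filter-and-append step in Go, as a sketch (the write is simplified; real code should preserve ownership and permissions):

package main

import (
	"log"
	"os"
	"strings"
)

// setHostsEntry rewrites the hosts file so exactly one line maps host to ip,
// mirroring the grep -v / echo pipeline in the log above.
func setHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) { // drop any stale mapping
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path) // atomic swap; needs root, like the sudo cp above
}

func main() {
	if err := setHostsEntry("/etc/hosts", "192.168.103.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}
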
	I1129 09:17:08.798913  336858 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-632243 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-632243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:17:08.799029  336858 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:17:08.799079  336858 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:17:08.837350  336858 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:17:08.837372  336858 crio.go:433] Images already preloaded, skipping extraction
	I1129 09:17:08.837428  336858 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:17:08.866420  336858 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:17:08.866442  336858 cache_images.go:86] Images are preloaded, skipping loading
	I1129 09:17:08.866449  336858 kubeadm.go:935] updating node { 192.168.103.2 8444 v1.34.1 crio true true} ...
	I1129 09:17:08.866564  336858 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-632243 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-632243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 09:17:08.866626  336858 ssh_runner.go:195] Run: crio config
	I1129 09:17:08.919714  336858 cni.go:84] Creating CNI manager for ""
	I1129 09:17:08.919737  336858 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:17:08.919750  336858 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 09:17:08.919771  336858 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-632243 NodeName:default-k8s-diff-port-632243 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:17:08.919920  336858 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-632243"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
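
The rendered file above is a single multi-document YAML stream: an InitConfiguration and a ClusterConfiguration (kubeadm.k8s.io/v1beta4), a KubeletConfiguration, and a KubeProxyConfiguration, separated by `---`. A quick way to sanity-check such a stream is to decode it document by document; a sketch using gopkg.in/yaml.v3 (the path is the one scp'd just below):

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f) // iterates over the "---"-separated documents
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%-35s %s\n", doc.APIVersion, doc.Kind)
	}
}
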
	I1129 09:17:08.919985  336858 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 09:17:08.929015  336858 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 09:17:08.929074  336858 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:17:08.937965  336858 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1129 09:17:08.952738  336858 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:17:08.966732  336858 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1129 09:17:08.980728  336858 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1129 09:17:08.984827  336858 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:17:08.995990  336858 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:17:09.082661  336858 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:17:09.111797  336858 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243 for IP: 192.168.103.2
	I1129 09:17:09.111822  336858 certs.go:195] generating shared ca certs ...
	I1129 09:17:09.111866  336858 certs.go:227] acquiring lock for ca certs: {Name:mk9945b3aa42b3d50b9bfd795af3a1c63f4e35bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:09.112052  336858 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key
	I1129 09:17:09.112688  336858 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key
	I1129 09:17:09.112726  336858 certs.go:257] generating profile certs ...
	I1129 09:17:09.112921  336858 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/client.key
	I1129 09:17:09.113021  336858 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/apiserver.key.6a7d6562
	I1129 09:17:09.113086  336858 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/proxy-client.key
	I1129 09:17:09.113257  336858 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216.pem (1338 bytes)
	W1129 09:17:09.113299  336858 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216_empty.pem, impossibly tiny 0 bytes
	I1129 09:17:09.113313  336858 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem (1679 bytes)
	I1129 09:17:09.113357  336858 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem (1078 bytes)
	I1129 09:17:09.113402  336858 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:17:09.113445  336858 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem (1675 bytes)
	I1129 09:17:09.113511  336858 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem (1708 bytes)
	I1129 09:17:09.115190  336858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:17:09.137644  336858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 09:17:09.158988  336858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:17:09.189279  336858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1129 09:17:09.225046  336858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1129 09:17:09.258006  336858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 09:17:09.287280  336858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:17:09.321294  336858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/default-k8s-diff-port-632243/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1129 09:17:09.355385  336858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:17:09.382688  336858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216.pem --> /usr/share/ca-certificates/9216.pem (1338 bytes)
	I1129 09:17:09.410146  336858 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem --> /usr/share/ca-certificates/92162.pem (1708 bytes)
	I1129 09:17:09.430817  336858 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:17:09.447052  336858 ssh_runner.go:195] Run: openssl version
	I1129 09:17:09.455827  336858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/92162.pem && ln -fs /usr/share/ca-certificates/92162.pem /etc/ssl/certs/92162.pem"
	I1129 09:17:09.466702  336858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92162.pem
	I1129 09:17:09.471733  336858 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:34 /usr/share/ca-certificates/92162.pem
	I1129 09:17:09.471813  336858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92162.pem
	I1129 09:17:09.512946  336858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/92162.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 09:17:09.522854  336858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:17:09.532621  336858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:17:09.540422  336858 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:17:09.540571  336858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:17:09.608410  336858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 09:17:09.630622  336858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9216.pem && ln -fs /usr/share/ca-certificates/9216.pem /etc/ssl/certs/9216.pem"
	I1129 09:17:09.646595  336858 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9216.pem
	I1129 09:17:09.653373  336858 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:34 /usr/share/ca-certificates/9216.pem
	I1129 09:17:09.653440  336858 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9216.pem
	I1129 09:17:09.715168  336858 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9216.pem /etc/ssl/certs/51391683.0"
	I1129 09:17:09.728975  336858 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:17:09.735349  336858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1129 09:17:09.816627  336858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1129 09:17:09.884865  336858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1129 09:17:09.953256  336858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1129 09:17:10.016619  336858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1129 09:17:10.089295  336858 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1129 09:17:10.150348  336858 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-632243 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-632243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:17:10.150515  336858 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 09:17:10.150606  336858 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:17:10.193491  336858 cri.go:89] found id: "b13c8a23740acd98b7a6a7244c86241544729c4895bf870e9bb842604451a0f4"
	I1129 09:17:10.193511  336858 cri.go:89] found id: "2080eaa5b786c79ead07692c870ce9928ace57a47032f699d66882570b205513"
	I1129 09:17:10.193515  336858 cri.go:89] found id: "c75e80b4e2dbb59237ca7e83b6a87a80d377951cce4c561324de39b3ea24a433"
	I1129 09:17:10.193518  336858 cri.go:89] found id: "be8adeee9f904b03165bd07f7f9279fad60f6e70a12d988e651be3f8e0e5974c"
	I1129 09:17:10.193602  336858 cri.go:89] found id: ""
	I1129 09:17:10.193643  336858 ssh_runner.go:195] Run: sudo runc list -f json
	W1129 09:17:10.213937  336858 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:17:10Z" level=error msg="open /run/runc: no such file or directory"
	I1129 09:17:10.214028  336858 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:17:10.232297  336858 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1129 09:17:10.232322  336858 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1129 09:17:10.232369  336858 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1129 09:17:10.243858  336858 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1129 09:17:10.245377  336858 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-632243" does not appear in /home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 09:17:10.246716  336858 kubeconfig.go:62] /home/jenkins/minikube-integration/22000-5652/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-632243" cluster setting kubeconfig missing "default-k8s-diff-port-632243" context setting]
	I1129 09:17:10.248401  336858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/kubeconfig: {Name:mk6280be5f60ace95b1c1acc672c087e2366542f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:10.251027  336858 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1129 09:17:10.263329  336858 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1129 09:17:10.263368  336858 kubeadm.go:602] duration metric: took 31.039015ms to restartPrimaryControlPlane
	I1129 09:17:10.263379  336858 kubeadm.go:403] duration metric: took 113.153865ms to StartCluster
	I1129 09:17:10.263398  336858 settings.go:142] acquiring lock: {Name:mkebe0b2667fa03f51e92459cd7b7f37fd4c23bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:10.263462  336858 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 09:17:10.269465  336858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/kubeconfig: {Name:mk6280be5f60ace95b1c1acc672c087e2366542f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:10.270140  336858 config.go:182] Loaded profile config "default-k8s-diff-port-632243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:17:10.270076  336858 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 09:17:10.270261  336858 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-632243"
	I1129 09:17:10.270287  336858 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-632243"
	W1129 09:17:10.270304  336858 addons.go:248] addon storage-provisioner should already be in state true
	I1129 09:17:10.270337  336858 host.go:66] Checking if "default-k8s-diff-port-632243" exists ...
	I1129 09:17:10.270868  336858 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-632243 --format={{.State.Status}}
	I1129 09:17:10.270921  336858 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-632243"
	I1129 09:17:10.271099  336858 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-632243"
	W1129 09:17:10.271120  336858 addons.go:248] addon dashboard should already be in state true
	I1129 09:17:10.271162  336858 host.go:66] Checking if "default-k8s-diff-port-632243" exists ...
	I1129 09:17:10.271803  336858 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-632243 --format={{.State.Status}}
	I1129 09:17:10.269903  336858 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 09:17:10.270943  336858 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-632243"
	I1129 09:17:10.272544  336858 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-632243"
	I1129 09:17:10.272879  336858 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-632243 --format={{.State.Status}}
	I1129 09:17:10.275285  336858 out.go:179] * Verifying Kubernetes components...
	I1129 09:17:10.276652  336858 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:17:10.307250  336858 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:17:10.308950  336858 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:17:10.308973  336858 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 09:17:10.309046  336858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-632243
	I1129 09:17:10.315630  336858 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1129 09:17:10.317010  336858 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1129 09:17:06.712239  328395 pod_ready.go:104] pod "coredns-5dd5756b68-lwg8c" is not "Ready", error: <nil>
	W1129 09:17:08.713985  328395 pod_ready.go:104] pod "coredns-5dd5756b68-lwg8c" is not "Ready", error: <nil>
	I1129 09:17:10.218648  328395 pod_ready.go:94] pod "coredns-5dd5756b68-lwg8c" is "Ready"
	I1129 09:17:10.218682  328395 pod_ready.go:86] duration metric: took 38.012873691s for pod "coredns-5dd5756b68-lwg8c" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:10.224585  328395 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-680646" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:10.232120  328395 pod_ready.go:94] pod "etcd-old-k8s-version-680646" is "Ready"
	I1129 09:17:10.232250  328395 pod_ready.go:86] duration metric: took 7.637262ms for pod "etcd-old-k8s-version-680646" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:10.236139  328395 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-680646" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:10.242626  328395 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-680646" is "Ready"
	I1129 09:17:10.242729  328395 pod_ready.go:86] duration metric: took 6.562994ms for pod "kube-apiserver-old-k8s-version-680646" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:10.248098  328395 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-680646" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:10.412947  328395 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-680646" is "Ready"
	I1129 09:17:10.412986  328395 pod_ready.go:86] duration metric: took 164.851946ms for pod "kube-controller-manager-old-k8s-version-680646" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:10.611271  328395 pod_ready.go:83] waiting for pod "kube-proxy-plgmf" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:11.010447  328395 pod_ready.go:94] pod "kube-proxy-plgmf" is "Ready"
	I1129 09:17:11.010483  328395 pod_ready.go:86] duration metric: took 399.180359ms for pod "kube-proxy-plgmf" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:11.211887  328395 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-680646" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:11.611656  328395 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-680646" is "Ready"
	I1129 09:17:11.611690  328395 pod_ready.go:86] duration metric: took 399.761614ms for pod "kube-scheduler-old-k8s-version-680646" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:11.611706  328395 pod_ready.go:40] duration metric: took 39.410470281s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:17:11.681062  328395 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1129 09:17:11.683136  328395 out.go:203] 
	W1129 09:17:11.684454  328395 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1129 09:17:11.685606  328395 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1129 09:17:11.686829  328395 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-680646" cluster and "default" namespace by default
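
The warning above is the kubectl version-skew rule at work: kubectl is supported within one minor version of the API server, and 1.34 against a 1.28.0 cluster is a skew of six minors, hence the suggestion to use the version-matched `minikube kubectl`. The check reduces to comparing minor numbers; a sketch with hand-rolled parsing for illustration (minikube itself presumably uses a semver library):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor number from a "major.minor.patch" version string.
func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return -1
	}
	m, err := strconv.Atoi(parts[1])
	if err != nil {
		return -1
	}
	return m
}

func main() {
	client, cluster := "1.34.2", "1.28.0" // values from the log above
	skew := minor(client) - minor(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("minor skew: %d (supported: at most 1)\n", skew) // prints 6
}
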
	I1129 09:17:09.566159  336547 addons.go:239] Setting addon default-storageclass=true in "embed-certs-160987"
	W1129 09:17:09.566185  336547 addons.go:248] addon default-storageclass should already be in state true
	I1129 09:17:09.566212  336547 host.go:66] Checking if "embed-certs-160987" exists ...
	I1129 09:17:09.566696  336547 cli_runner.go:164] Run: docker container inspect embed-certs-160987 --format={{.State.Status}}
	I1129 09:17:09.567447  336547 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1129 09:17:09.567464  336547 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1129 09:17:09.567519  336547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:17:09.595351  336547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/embed-certs-160987/id_rsa Username:docker}
	I1129 09:17:09.605909  336547 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 09:17:09.605948  336547 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 09:17:09.606137  336547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:17:09.616139  336547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/embed-certs-160987/id_rsa Username:docker}
	I1129 09:17:09.635385  336547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/embed-certs-160987/id_rsa Username:docker}
	I1129 09:17:09.727816  336547 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:17:09.749525  336547 node_ready.go:35] waiting up to 6m0s for node "embed-certs-160987" to be "Ready" ...
	I1129 09:17:09.750531  336547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:17:09.766303  336547 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1129 09:17:09.766333  336547 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1129 09:17:09.805890  336547 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1129 09:17:09.805924  336547 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1129 09:17:09.811927  336547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 09:17:09.854564  336547 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1129 09:17:09.854599  336547 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1129 09:17:09.914496  336547 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1129 09:17:09.914520  336547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1129 09:17:09.938527  336547 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1129 09:17:09.938549  336547 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1129 09:17:09.962528  336547 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1129 09:17:09.962577  336547 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1129 09:17:09.983095  336547 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1129 09:17:09.983129  336547 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1129 09:17:10.005127  336547 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1129 09:17:10.005249  336547 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1129 09:17:10.036384  336547 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1129 09:17:10.036413  336547 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1129 09:17:10.058137  336547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
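
Addon enablement here amounts to "scp the manifests, then apply them with the node's own kubectl against the node-local kubeconfig", which keeps the apply independent of whatever kubectl the host happens to have. A sketch of that invocation for a single manifest (illustrative; the full command line with all ten dashboard files is the one in the log above):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Mirrors the apply in the log: sudo with an inline KUBECONFIG assignment,
	// the version-matched kubectl binary, and an in-VM manifest path. In the
	// real flow this runs over SSH inside the node.
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.34.1/kubectl",
		"apply",
		"-f", "/etc/kubernetes/addons/storage-provisioner.yaml",
	)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("apply failed: %v\n%s", err, out)
	}
}
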
	I1129 09:17:11.827730  336547 node_ready.go:49] node "embed-certs-160987" is "Ready"
	I1129 09:17:11.827773  336547 node_ready.go:38] duration metric: took 2.078209043s for node "embed-certs-160987" to be "Ready" ...
	I1129 09:17:11.827790  336547 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:17:11.827861  336547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:17:10.320018  336858 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1129 09:17:10.320041  336858 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1129 09:17:10.320108  336858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-632243
	I1129 09:17:10.327689  336858 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-632243"
	W1129 09:17:10.327721  336858 addons.go:248] addon default-storageclass should already be in state true
	I1129 09:17:10.327747  336858 host.go:66] Checking if "default-k8s-diff-port-632243" exists ...
	I1129 09:17:10.328232  336858 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-632243 --format={{.State.Status}}
	I1129 09:17:10.352961  336858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/default-k8s-diff-port-632243/id_rsa Username:docker}
	I1129 09:17:10.370131  336858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/default-k8s-diff-port-632243/id_rsa Username:docker}
	I1129 09:17:10.378023  336858 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 09:17:10.378057  336858 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 09:17:10.378127  336858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-632243
	I1129 09:17:10.418800  336858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/default-k8s-diff-port-632243/id_rsa Username:docker}
	I1129 09:17:10.488098  336858 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:17:10.513724  336858 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-632243" to be "Ready" ...
	I1129 09:17:10.535707  336858 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1129 09:17:10.535731  336858 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1129 09:17:10.557565  336858 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:17:10.557655  336858 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1129 09:17:10.557666  336858 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1129 09:17:10.580874  336858 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 09:17:10.589021  336858 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1129 09:17:10.589119  336858 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1129 09:17:10.664545  336858 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1129 09:17:10.664568  336858 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1129 09:17:10.689645  336858 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1129 09:17:10.689743  336858 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1129 09:17:10.715183  336858 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1129 09:17:10.715208  336858 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1129 09:17:10.739564  336858 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1129 09:17:10.739612  336858 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1129 09:17:10.764832  336858 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1129 09:17:10.764880  336858 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1129 09:17:10.786820  336858 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1129 09:17:10.786876  336858 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1129 09:17:10.811112  336858 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1129 09:17:12.118076  336858 node_ready.go:49] node "default-k8s-diff-port-632243" is "Ready"
	I1129 09:17:12.119135  336858 node_ready.go:38] duration metric: took 1.6053343s for node "default-k8s-diff-port-632243" to be "Ready" ...
	I1129 09:17:12.119742  336858 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:17:12.120049  336858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:17:12.769752  336547 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.019178744s)
	I1129 09:17:12.770100  336547 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.711923947s)
	I1129 09:17:12.770290  336547 api_server.go:72] duration metric: took 3.237730119s to wait for apiserver process to appear ...
	I1129 09:17:12.770305  336547 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:17:12.770326  336547 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:17:12.769946  336547 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.957969562s)
	I1129 09:17:12.774695  336547 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-160987 addons enable metrics-server
	
	I1129 09:17:12.776427  336547 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1129 09:17:12.776462  336547 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1129 09:17:12.790995  336547 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1129 09:17:12.895120  336858 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.337512829s)
	I1129 09:17:12.895216  336858 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.314316305s)
	I1129 09:17:12.895574  336858 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.084422942s)
	I1129 09:17:12.896112  336858 api_server.go:72] duration metric: took 2.623523551s to wait for apiserver process to appear ...
	I1129 09:17:12.896132  336858 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:17:12.896156  336858 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1129 09:17:12.899702  336858 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-632243 addons enable metrics-server
	
	I1129 09:17:12.903738  336858 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1129 09:17:12.903764  336858 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1129 09:17:12.906550  336858 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1129 09:17:09.037752  331191 pod_ready.go:104] pod "coredns-66bc5c9577-85hh2" is not "Ready", error: <nil>
	W1129 09:17:11.038421  331191 pod_ready.go:104] pod "coredns-66bc5c9577-85hh2" is not "Ready", error: <nil>
	I1129 09:17:12.792549  336547 addons.go:530] duration metric: took 3.259698336s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1129 09:17:13.270675  336547 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:17:13.276075  336547 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1129 09:17:13.277477  336547 api_server.go:141] control plane version: v1.34.1
	I1129 09:17:13.277517  336547 api_server.go:131] duration metric: took 507.203499ms to wait for apiserver health ...
	I1129 09:17:13.277529  336547 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:17:13.282281  336547 system_pods.go:59] 8 kube-system pods found
	I1129 09:17:13.282329  336547 system_pods.go:61] "coredns-66bc5c9577-ptx67" [3cdde537-5064-49d7-8c8b-367639774c63] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:17:13.282346  336547 system_pods.go:61] "etcd-embed-certs-160987" [347faf57-8141-49d9-8ef9-6a1b04b8641a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 09:17:13.282364  336547 system_pods.go:61] "kindnet-cvmj6" [239c4b88-9d52-42da-ae39-5eb83d7d3fd1] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1129 09:17:13.282374  336547 system_pods.go:61] "kube-apiserver-embed-certs-160987" [27540c8b-5b66-40c8-91e4-299a0450fd50] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 09:17:13.282387  336547 system_pods.go:61] "kube-controller-manager-embed-certs-160987" [33fd03e7-f337-4fee-b783-ffa135030207] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 09:17:13.282397  336547 system_pods.go:61] "kube-proxy-57l9h" [93cda014-998a-4285-81c6-bead54a287e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1129 09:17:13.282408  336547 system_pods.go:61] "kube-scheduler-embed-certs-160987" [98695b36-0694-44ff-a494-f8316190fcad] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 09:17:13.282415  336547 system_pods.go:61] "storage-provisioner" [3e04560b-9e25-4b2e-9f7e-d55b0ae42dbd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:17:13.282425  336547 system_pods.go:74] duration metric: took 4.889359ms to wait for pod list to return data ...
	I1129 09:17:13.282440  336547 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:17:13.285794  336547 default_sa.go:45] found service account: "default"
	I1129 09:17:13.285823  336547 default_sa.go:55] duration metric: took 3.376181ms for default service account to be created ...
	I1129 09:17:13.285835  336547 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 09:17:13.289736  336547 system_pods.go:86] 8 kube-system pods found
	I1129 09:17:13.289778  336547 system_pods.go:89] "coredns-66bc5c9577-ptx67" [3cdde537-5064-49d7-8c8b-367639774c63] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:17:13.289789  336547 system_pods.go:89] "etcd-embed-certs-160987" [347faf57-8141-49d9-8ef9-6a1b04b8641a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 09:17:13.289801  336547 system_pods.go:89] "kindnet-cvmj6" [239c4b88-9d52-42da-ae39-5eb83d7d3fd1] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1129 09:17:13.289813  336547 system_pods.go:89] "kube-apiserver-embed-certs-160987" [27540c8b-5b66-40c8-91e4-299a0450fd50] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 09:17:13.289828  336547 system_pods.go:89] "kube-controller-manager-embed-certs-160987" [33fd03e7-f337-4fee-b783-ffa135030207] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 09:17:13.289836  336547 system_pods.go:89] "kube-proxy-57l9h" [93cda014-998a-4285-81c6-bead54a287e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1129 09:17:13.289864  336547 system_pods.go:89] "kube-scheduler-embed-certs-160987" [98695b36-0694-44ff-a494-f8316190fcad] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 09:17:13.289872  336547 system_pods.go:89] "storage-provisioner" [3e04560b-9e25-4b2e-9f7e-d55b0ae42dbd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:17:13.289883  336547 system_pods.go:126] duration metric: took 4.009264ms to wait for k8s-apps to be running ...
	I1129 09:17:13.289900  336547 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 09:17:13.289966  336547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:17:13.307331  336547 system_svc.go:56] duration metric: took 17.413222ms WaitForService to wait for kubelet
	I1129 09:17:13.307366  336547 kubeadm.go:587] duration metric: took 3.7748063s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:17:13.307391  336547 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:17:13.311371  336547 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1129 09:17:13.311405  336547 node_conditions.go:123] node cpu capacity is 8
	I1129 09:17:13.311428  336547 node_conditions.go:105] duration metric: took 4.030874ms to run NodePressure ...
	I1129 09:17:13.311443  336547 start.go:242] waiting for startup goroutines ...
	I1129 09:17:13.311458  336547 start.go:247] waiting for cluster config update ...
	I1129 09:17:13.311489  336547 start.go:256] writing updated cluster config ...
	I1129 09:17:13.311895  336547 ssh_runner.go:195] Run: rm -f paused
	I1129 09:17:13.316918  336547 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:17:13.321892  336547 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ptx67" in "kube-system" namespace to be "Ready" or be gone ...
	W1129 09:17:15.327300  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	I1129 09:17:12.907626  336858 addons.go:530] duration metric: took 2.637558772s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1129 09:17:13.396730  336858 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1129 09:17:13.401351  336858 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1129 09:17:13.402337  336858 api_server.go:141] control plane version: v1.34.1
	I1129 09:17:13.402363  336858 api_server.go:131] duration metric: took 506.224229ms to wait for apiserver health ...
	I1129 09:17:13.402372  336858 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:17:13.405669  336858 system_pods.go:59] 8 kube-system pods found
	I1129 09:17:13.405707  336858 system_pods.go:61] "coredns-66bc5c9577-z4m7c" [98358d85-a090-44af-b52c-b5043215489d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:17:13.405715  336858 system_pods.go:61] "etcd-default-k8s-diff-port-632243" [09a34b15-fbfc-4348-90c4-e24e6baf1a19] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 09:17:13.405720  336858 system_pods.go:61] "kindnet-tpstm" [15e600f0-69fa-43be-ad87-07a80e245c73] Running
	I1129 09:17:13.405727  336858 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-632243" [05294706-b493-4660-8b69-19a3686ec539] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 09:17:13.405735  336858 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-632243" [fb12ecb8-1c38-404c-b1f5-c52bd3c76ae3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 09:17:13.405739  336858 system_pods.go:61] "kube-proxy-p2nf7" [50905f73-5af2-401c-a482-7d68d8d3bdc4] Running
	I1129 09:17:13.405744  336858 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-632243" [31003176-dbcb-4f15-88c6-ea1592ffdf1b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 09:17:13.405755  336858 system_pods.go:61] "storage-provisioner" [b28962e0-c388-44d7-8e57-e4030e80dabd] Running
	I1129 09:17:13.405761  336858 system_pods.go:74] duration metric: took 3.383976ms to wait for pod list to return data ...
	I1129 09:17:13.405768  336858 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:17:13.408960  336858 default_sa.go:45] found service account: "default"
	I1129 09:17:13.409075  336858 default_sa.go:55] duration metric: took 3.291457ms for default service account to be created ...
	I1129 09:17:13.409095  336858 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 09:17:13.412512  336858 system_pods.go:86] 8 kube-system pods found
	I1129 09:17:13.412548  336858 system_pods.go:89] "coredns-66bc5c9577-z4m7c" [98358d85-a090-44af-b52c-b5043215489d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:17:13.412562  336858 system_pods.go:89] "etcd-default-k8s-diff-port-632243" [09a34b15-fbfc-4348-90c4-e24e6baf1a19] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 09:17:13.412574  336858 system_pods.go:89] "kindnet-tpstm" [15e600f0-69fa-43be-ad87-07a80e245c73] Running
	I1129 09:17:13.412585  336858 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-632243" [05294706-b493-4660-8b69-19a3686ec539] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 09:17:13.412596  336858 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-632243" [fb12ecb8-1c38-404c-b1f5-c52bd3c76ae3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 09:17:13.412600  336858 system_pods.go:89] "kube-proxy-p2nf7" [50905f73-5af2-401c-a482-7d68d8d3bdc4] Running
	I1129 09:17:13.412610  336858 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-632243" [31003176-dbcb-4f15-88c6-ea1592ffdf1b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 09:17:13.412614  336858 system_pods.go:89] "storage-provisioner" [b28962e0-c388-44d7-8e57-e4030e80dabd] Running
	I1129 09:17:13.412622  336858 system_pods.go:126] duration metric: took 3.519364ms to wait for k8s-apps to be running ...
	I1129 09:17:13.412638  336858 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 09:17:13.412691  336858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:17:13.429973  336858 system_svc.go:56] duration metric: took 17.326281ms WaitForService to wait for kubelet
	I1129 09:17:13.430007  336858 kubeadm.go:587] duration metric: took 3.157528585s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:17:13.430028  336858 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:17:13.433137  336858 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1129 09:17:13.433164  336858 node_conditions.go:123] node cpu capacity is 8
	I1129 09:17:13.433177  336858 node_conditions.go:105] duration metric: took 3.143636ms to run NodePressure ...
	I1129 09:17:13.433188  336858 start.go:242] waiting for startup goroutines ...
	I1129 09:17:13.433195  336858 start.go:247] waiting for cluster config update ...
	I1129 09:17:13.433207  336858 start.go:256] writing updated cluster config ...
	I1129 09:17:13.433491  336858 ssh_runner.go:195] Run: rm -f paused
	I1129 09:17:13.437630  336858 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:17:13.441716  336858 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-z4m7c" in "kube-system" namespace to be "Ready" or be gone ...
	W1129 09:17:15.448064  336858 pod_ready.go:104] pod "coredns-66bc5c9577-z4m7c" is not "Ready", error: <nil>
	W1129 09:17:17.449407  336858 pod_ready.go:104] pod "coredns-66bc5c9577-z4m7c" is not "Ready", error: <nil>
	W1129 09:17:13.536780  331191 pod_ready.go:104] pod "coredns-66bc5c9577-85hh2" is not "Ready", error: <nil>
	W1129 09:17:16.036205  331191 pod_ready.go:104] pod "coredns-66bc5c9577-85hh2" is not "Ready", error: <nil>
	W1129 09:17:18.036831  331191 pod_ready.go:104] pod "coredns-66bc5c9577-85hh2" is not "Ready", error: <nil>
	I1129 09:17:18.537932  331191 pod_ready.go:94] pod "coredns-66bc5c9577-85hh2" is "Ready"
	I1129 09:17:18.537961  331191 pod_ready.go:86] duration metric: took 34.507701467s for pod "coredns-66bc5c9577-85hh2" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:18.541894  331191 pod_ready.go:83] waiting for pod "etcd-no-preload-897274" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:18.546925  331191 pod_ready.go:94] pod "etcd-no-preload-897274" is "Ready"
	I1129 09:17:18.546955  331191 pod_ready.go:86] duration metric: took 5.03231ms for pod "etcd-no-preload-897274" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:18.550208  331191 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-897274" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:18.555380  331191 pod_ready.go:94] pod "kube-apiserver-no-preload-897274" is "Ready"
	I1129 09:17:18.555410  331191 pod_ready.go:86] duration metric: took 5.173912ms for pod "kube-apiserver-no-preload-897274" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:18.558304  331191 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-897274" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:18.735161  331191 pod_ready.go:94] pod "kube-controller-manager-no-preload-897274" is "Ready"
	I1129 09:17:18.735191  331191 pod_ready.go:86] duration metric: took 176.860384ms for pod "kube-controller-manager-no-preload-897274" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:18.935924  331191 pod_ready.go:83] waiting for pod "kube-proxy-h9zhz" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:19.335548  331191 pod_ready.go:94] pod "kube-proxy-h9zhz" is "Ready"
	I1129 09:17:19.335626  331191 pod_ready.go:86] duration metric: took 399.669ms for pod "kube-proxy-h9zhz" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:19.534980  331191 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-897274" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:19.935951  331191 pod_ready.go:94] pod "kube-scheduler-no-preload-897274" is "Ready"
	I1129 09:17:19.935986  331191 pod_ready.go:86] duration metric: took 400.979445ms for pod "kube-scheduler-no-preload-897274" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:19.936003  331191 pod_ready.go:40] duration metric: took 35.910372067s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:17:19.998301  331191 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1129 09:17:20.030076  331191 out.go:179] * Done! kubectl is now configured to use "no-preload-897274" cluster and "default" namespace by default
	W1129 09:17:17.329923  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	W1129 09:17:19.833757  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	W1129 09:17:19.450200  336858 pod_ready.go:104] pod "coredns-66bc5c9577-z4m7c" is not "Ready", error: <nil>
	W1129 09:17:21.948576  336858 pod_ready.go:104] pod "coredns-66bc5c9577-z4m7c" is not "Ready", error: <nil>
	W1129 09:17:22.327825  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	W1129 09:17:24.328716  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	W1129 09:17:26.828673  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	W1129 09:17:24.448050  336858 pod_ready.go:104] pod "coredns-66bc5c9577-z4m7c" is not "Ready", error: <nil>
	W1129 09:17:26.947585  336858 pod_ready.go:104] pod "coredns-66bc5c9577-z4m7c" is not "Ready", error: <nil>
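
For context: the HTTP 500 bodies earlier in this log are the kube-apiserver's verbose /healthz output; minikube keeps polling until the [-]-marked post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) report ok, after which the endpoint returns 200. A manual spot-check of the same endpoints could look like the sketch below; the profile names, IPs, and ports are taken from the log above, while the minikube ssh/curl invocation itself is an assumed reproduction step, not part of the test run.

	# assumed reproduction sketch: query the same verbose healthz endpoints the log polls
	minikube -p embed-certs-160987 ssh -- curl -sk 'https://192.168.85.2:8443/healthz?verbose'
	minikube -p default-k8s-diff-port-632243 ssh -- curl -sk 'https://192.168.103.2:8444/healthz?verbose'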
	
	
	==> CRI-O <==
	Nov 29 09:16:49 old-k8s-version-680646 crio[566]: time="2025-11-29T09:16:49.994699634Z" level=info msg="Created container 61ce0b8ff133dd7770871455d52b8eb5571079a0a2609fadc954e3bec70465cd: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mn66t/kubernetes-dashboard" id=ce5f9ca1-b2ba-4fed-91d6-4e711445dbd3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:16:49 old-k8s-version-680646 crio[566]: time="2025-11-29T09:16:49.995310868Z" level=info msg="Starting container: 61ce0b8ff133dd7770871455d52b8eb5571079a0a2609fadc954e3bec70465cd" id=fb7e9cc4-b2e0-4e10-96da-9047ee6678bb name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 09:16:49 old-k8s-version-680646 crio[566]: time="2025-11-29T09:16:49.997011636Z" level=info msg="Started container" PID=1746 containerID=61ce0b8ff133dd7770871455d52b8eb5571079a0a2609fadc954e3bec70465cd description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mn66t/kubernetes-dashboard id=fb7e9cc4-b2e0-4e10-96da-9047ee6678bb name=/runtime.v1.RuntimeService/StartContainer sandboxID=56bf2e95637320ec269ba0d7e2915a6318c4fc05c684986d39cf38d50a770511
	Nov 29 09:17:02 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:02.36623605Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=97d52a04-14d4-46aa-bbc1-fd61caa52c55 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:17:02 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:02.367270051Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1a7b4595-59c1-4016-a512-dbc8888deb13 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:17:02 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:02.368446289Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=c098ab86-36b7-4f2a-a2df-ceb05228a93b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:17:02 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:02.368607507Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:02 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:02.373290484Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:02 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:02.373554278Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/491028c8497b0ee8af717e4994dbfea4fa278e870319eb4f4752d9d68d653924/merged/etc/passwd: no such file or directory"
	Nov 29 09:17:02 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:02.373585929Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/491028c8497b0ee8af717e4994dbfea4fa278e870319eb4f4752d9d68d653924/merged/etc/group: no such file or directory"
	Nov 29 09:17:02 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:02.373880932Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:02 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:02.409795372Z" level=info msg="Created container c5f2e76d762bbed4aac24938d20ed8ef6bc68a75c8faeace43ca72adfebaa06f: kube-system/storage-provisioner/storage-provisioner" id=c098ab86-36b7-4f2a-a2df-ceb05228a93b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:17:02 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:02.410623865Z" level=info msg="Starting container: c5f2e76d762bbed4aac24938d20ed8ef6bc68a75c8faeace43ca72adfebaa06f" id=41c210e6-6f95-410f-a826-69f3de7ac5f9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 09:17:02 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:02.412511378Z" level=info msg="Started container" PID=1768 containerID=c5f2e76d762bbed4aac24938d20ed8ef6bc68a75c8faeace43ca72adfebaa06f description=kube-system/storage-provisioner/storage-provisioner id=41c210e6-6f95-410f-a826-69f3de7ac5f9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c9220bd7ef8fc70d03f523d922f4a8ee357b0d1c67fd53cbecbc7ea6f0e1ff11
	Nov 29 09:17:09 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:09.248589002Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a9c52618-016b-4c84-95fe-70adf48227f8 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:17:09 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:09.250500594Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=da2a4d1c-8caa-46db-8a3a-cd2ab575537d name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:17:09 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:09.254344589Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-8d8lv/dashboard-metrics-scraper" id=82c8d848-cc21-406e-8072-6e110d5dcbc4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:17:09 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:09.254539119Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:09 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:09.264597113Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:09 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:09.265354191Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:09 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:09.306445923Z" level=info msg="Created container e4c8a4234fb3187bd7599e962d6a85a954434a95064cb491b1e31a4ed482fd06: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-8d8lv/dashboard-metrics-scraper" id=82c8d848-cc21-406e-8072-6e110d5dcbc4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:17:09 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:09.307466682Z" level=info msg="Starting container: e4c8a4234fb3187bd7599e962d6a85a954434a95064cb491b1e31a4ed482fd06" id=8e87c6c5-359b-4452-b089-d3130040b0da name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 09:17:09 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:09.310405195Z" level=info msg="Started container" PID=1784 containerID=e4c8a4234fb3187bd7599e962d6a85a954434a95064cb491b1e31a4ed482fd06 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-8d8lv/dashboard-metrics-scraper id=8e87c6c5-359b-4452-b089-d3130040b0da name=/runtime.v1.RuntimeService/StartContainer sandboxID=e57bda849df2e63f8f8866c8254f09e7b86f9be1891897aef0dc5e274576168e
	Nov 29 09:17:09 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:09.387086537Z" level=info msg="Removing container: 8ce55b594fee130d22c7869b97adc182b5a3fb462788336518e3e0129a8b1a65" id=c00c5af5-bf6d-47bc-9302-473b7a81c2cd name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 29 09:17:09 old-k8s-version-680646 crio[566]: time="2025-11-29T09:17:09.403496931Z" level=info msg="Removed container 8ce55b594fee130d22c7869b97adc182b5a3fb462788336518e3e0129a8b1a65: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-8d8lv/dashboard-metrics-scraper" id=c00c5af5-bf6d-47bc-9302-473b7a81c2cd name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	e4c8a4234fb31       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago       Exited              dashboard-metrics-scraper   2                   e57bda849df2e       dashboard-metrics-scraper-5f989dc9cf-8d8lv       kubernetes-dashboard
	c5f2e76d762bb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           26 seconds ago       Running             storage-provisioner         1                   c9220bd7ef8fc       storage-provisioner                              kube-system
	61ce0b8ff133d       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   38 seconds ago       Running             kubernetes-dashboard        0                   56bf2e9563732       kubernetes-dashboard-8694d4445c-mn66t            kubernetes-dashboard
	f7b192ed98d03       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           56 seconds ago       Running             kindnet-cni                 0                   0d1bb0b0c97de       kindnet-xjmpm                                    kube-system
	e9a43784b2c72       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           56 seconds ago       Running             kube-proxy                  0                   b64f03c38e837       kube-proxy-plgmf                                 kube-system
	4eeb9cde84ff0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago       Exited              storage-provisioner         0                   c9220bd7ef8fc       storage-provisioner                              kube-system
	6fc7cfe8003b7       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago       Running             busybox                     1                   f13282378c76d       busybox                                          default
	56c6159514de4       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           56 seconds ago       Running             coredns                     0                   f0e707b6e0a25       coredns-5dd5756b68-lwg8c                         kube-system
	f619cafca5a17       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           About a minute ago   Running             etcd                        0                   f50cb89565963       etcd-old-k8s-version-680646                      kube-system
	fc21916bee97c       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           About a minute ago   Running             kube-controller-manager     0                   13bee11f22643       kube-controller-manager-old-k8s-version-680646   kube-system
	30573a3cd5db7       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           About a minute ago   Running             kube-apiserver              0                   0fa354d9ee3c0       kube-apiserver-old-k8s-version-680646            kube-system
	719922d67f462       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           About a minute ago   Running             kube-scheduler              0                   5b20d26610a4d       kube-scheduler-old-k8s-version-680646            kube-system
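
For context: the table above is CRI-O container state for the old-k8s-version node as reported via the CRI; a comparable listing could be produced with the assumed sketch below (crictl ps -a includes exited containers such as the dashboard-metrics-scraper and the first storage-provisioner attempt).

	# assumed reproduction sketch: list all containers on the old-k8s-version node
	minikube -p old-k8s-version-680646 ssh -- sudo crictl ps -a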
	
	
	==> coredns [56c6159514de487ff8175db94d66f55079bfff299bcf0181130cc9274ba6fbd4] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59208 - 62803 "HINFO IN 3617868267834132906.4431784462698880449. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.030677404s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-680646
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-680646
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=old-k8s-version-680646
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T09_15_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 09:15:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-680646
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 09:17:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 09:17:01 +0000   Sat, 29 Nov 2025 09:15:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 09:17:01 +0000   Sat, 29 Nov 2025 09:15:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 09:17:01 +0000   Sat, 29 Nov 2025 09:15:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 09:17:01 +0000   Sat, 29 Nov 2025 09:15:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-680646
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                3f6721fd-aca4-48a4-bf5d-00d6fd2bc52a
	  Boot ID:                    5b66e152-4b71-4129-a911-cefbf4861c86
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-5dd5756b68-lwg8c                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     112s
	  kube-system                 etcd-old-k8s-version-680646                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m5s
	  kube-system                 kindnet-xjmpm                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-old-k8s-version-680646             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-controller-manager-old-k8s-version-680646    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-proxy-plgmf                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-old-k8s-version-680646             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-8d8lv        0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-mn66t             0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 110s                   kube-proxy       
	  Normal  Starting                 56s                    kube-proxy       
	  Normal  Starting                 2m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m10s (x9 over 2m10s)  kubelet          Node old-k8s-version-680646 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-680646 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s (x7 over 2m10s)  kubelet          Node old-k8s-version-680646 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m4s                   kubelet          Node old-k8s-version-680646 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m4s                   kubelet          Node old-k8s-version-680646 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m4s                   kubelet          Node old-k8s-version-680646 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m4s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           113s                   node-controller  Node old-k8s-version-680646 event: Registered Node old-k8s-version-680646 in Controller
	  Normal  NodeReady                99s                    kubelet          Node old-k8s-version-680646 status is now: NodeReady
	  Normal  Starting                 61s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s (x9 over 61s)      kubelet          Node old-k8s-version-680646 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x8 over 61s)      kubelet          Node old-k8s-version-680646 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x7 over 61s)      kubelet          Node old-k8s-version-680646 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           45s                    node-controller  Node old-k8s-version-680646 event: Registered Node old-k8s-version-680646 in Controller
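
For context: the node description above is standard kubectl describe node output for this profile's control-plane node; a comparable dump could be re-fetched with the assumed sketch below (minikube names the kubeconfig context after the profile).

	# assumed reproduction sketch: re-fetch the node description for this profile
	kubectl --context old-k8s-version-680646 describe node old-k8s-version-680646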
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 e1 87 e4 84 ae 08 06
	[  +0.002640] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8a c7 6c 3e 12 d1 08 06
	[ +14.778105] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 56 8a 29 c7 2a 6f 08 06
	[ +13.520989] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 86 32 aa eb 52 77 08 06
	[  +0.000371] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 56 8a 29 c7 2a 6f 08 06
	[ +14.762906] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 df ae 93 da f6 08 06
	[  +0.000384] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a c7 6c 3e 12 d1 08 06
	[Nov29 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 2e a3 ac f9 5c c0 08 06
	[ +13.297626] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 33 ff 54 aa a1 08 06
	[  +7.390906] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0a 68 f3 3d 3c e9 08 06
	[  +0.000436] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 2e a3 ac f9 5c c0 08 06
	[  +7.145922] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ea c3 2f 2a a3 01 08 06
	[  +0.000415] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e2 33 ff 54 aa a1 08 06
	
	
	==> etcd [f619cafca5a17742a3c6fba5014451687d7d35e25977a157e5be1c8489be5079] <==
	{"level":"info","ts":"2025-11-29T09:16:27.814999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-11-29T09:16:27.815069Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-29T09:16:27.815237Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-29T09:16:27.81532Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-29T09:16:27.818566Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-29T09:16:27.818635Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-29T09:16:27.818988Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-29T09:16:27.818939Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-29T09:16:27.819004Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-29T09:16:29.607738Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-29T09:16:29.607786Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-29T09:16:29.6078Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-29T09:16:29.607812Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-11-29T09:16:29.607817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-29T09:16:29.607825Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-11-29T09:16:29.607833Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-29T09:16:29.610145Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-680646 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-29T09:16:29.610238Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-29T09:16:29.610236Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-29T09:16:29.610547Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-29T09:16:29.610859Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-29T09:16:29.611924Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-29T09:16:29.611923Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"warn","ts":"2025-11-29T09:17:01.593066Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.766777ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356969507594503 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/old-k8s-version-680646\" mod_revision:639 > success:<request_put:<key:\"/registry/leases/kube-node-lease/old-k8s-version-680646\" value_size:514 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/old-k8s-version-680646\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-29T09:17:01.593189Z","caller":"traceutil/trace.go:171","msg":"trace[983389728] transaction","detail":"{read_only:false; response_revision:645; number_of_response:1; }","duration":"210.094117ms","start":"2025-11-29T09:17:01.383077Z","end":"2025-11-29T09:17:01.593171Z","steps":["trace[983389728] 'process raft request'  (duration: 93.572287ms)","trace[983389728] 'compare'  (duration: 115.673692ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:17:28 up  1:00,  0 user,  load average: 3.75, 3.86, 2.50
	Linux old-k8s-version-680646 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f7b192ed98d03e41ad05d96225f71cd6ca9e5e80615108419e5489cfe0ae91e8] <==
	I1129 09:16:32.101265       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 09:16:32.101601       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1129 09:16:32.101796       1 main.go:148] setting mtu 1500 for CNI 
	I1129 09:16:32.101821       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 09:16:32.114977       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T09:16:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 09:16:32.319622       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 09:16:32.319657       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 09:16:32.319670       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 09:16:32.319865       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1129 09:16:32.701047       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 09:16:32.701108       1 metrics.go:72] Registering metrics
	I1129 09:16:32.701189       1 controller.go:711] "Syncing nftables rules"
	I1129 09:16:42.319655       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 09:16:42.319713       1 main.go:301] handling current node
	I1129 09:16:52.319930       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 09:16:52.319985       1 main.go:301] handling current node
	I1129 09:17:02.320028       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 09:17:02.320096       1 main.go:301] handling current node
	I1129 09:17:12.320969       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 09:17:12.321015       1 main.go:301] handling current node
	I1129 09:17:22.325947       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 09:17:22.325988       1 main.go:301] handling current node
	
	
	==> kube-apiserver [30573a3cd5db71fef67e2dd17636eef9fcc8eb82fe36a7ff2ed1d3a6ca9f1919] <==
	I1129 09:16:30.592102       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1129 09:16:30.634290       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 09:16:30.662032       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1129 09:16:30.664395       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1129 09:16:30.664448       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1129 09:16:30.664448       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1129 09:16:30.664659       1 aggregator.go:166] initial CRD sync complete...
	I1129 09:16:30.664671       1 autoregister_controller.go:141] Starting autoregister controller
	I1129 09:16:30.664692       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1129 09:16:30.664701       1 cache.go:39] Caches are synced for autoregister controller
	I1129 09:16:30.664731       1 shared_informer.go:318] Caches are synced for configmaps
	I1129 09:16:30.672392       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1129 09:16:30.676763       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1129 09:16:30.676788       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1129 09:16:31.516984       1 controller.go:624] quota admission added evaluator for: namespaces
	I1129 09:16:31.553215       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1129 09:16:31.572488       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 09:16:31.581038       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 09:16:31.591614       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 09:16:31.604458       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1129 09:16:31.645570       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.159.34"}
	I1129 09:16:31.657582       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.144.223"}
	I1129 09:16:43.760836       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 09:16:43.961333       1 controller.go:624] quota admission added evaluator for: endpoints
	I1129 09:16:44.011788       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [fc21916bee97cf411bc0e0fecd6723e2e6882a5a2e9c27cf65544bc90cf2c965] <==
	I1129 09:16:44.071191       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="313.585485ms"
	I1129 09:16:44.071334       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="90.332µs"
	I1129 09:16:44.072965       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-mn66t"
	I1129 09:16:44.073084       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-8d8lv"
	I1129 09:16:44.081249       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="65.113103ms"
	I1129 09:16:44.082474       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="66.387366ms"
	I1129 09:16:44.088986       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="7.659818ms"
	I1129 09:16:44.089144       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="101.753µs"
	I1129 09:16:44.089223       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="37.217µs"
	I1129 09:16:44.091206       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="8.678658ms"
	I1129 09:16:44.091332       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="72.458µs"
	I1129 09:16:44.095514       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="437.42µs"
	I1129 09:16:44.098597       1 shared_informer.go:318] Caches are synced for garbage collector
	I1129 09:16:44.098635       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1129 09:16:44.098643       1 shared_informer.go:318] Caches are synced for garbage collector
	I1129 09:16:44.108771       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="75.458µs"
	I1129 09:16:47.336099       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="86.719µs"
	I1129 09:16:48.340378       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="242.458µs"
	I1129 09:16:49.358756       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="116.663µs"
	I1129 09:16:50.359619       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="9.299923ms"
	I1129 09:16:50.359902       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="78.318µs"
	I1129 09:17:09.405310       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="83.23µs"
	I1129 09:17:10.207861       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.366963ms"
	I1129 09:17:10.209296       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="67.331µs"
	I1129 09:17:14.395715       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="70.128µs"
	
	
	==> kube-proxy [e9a43784b2c72acefa4683955a09e9ac167529a849f27e92985305377c18378c] <==
	I1129 09:16:31.937701       1 server_others.go:69] "Using iptables proxy"
	I1129 09:16:31.949569       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1129 09:16:31.973084       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 09:16:31.976314       1 server_others.go:152] "Using iptables Proxier"
	I1129 09:16:31.976354       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1129 09:16:31.976361       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1129 09:16:31.976389       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1129 09:16:31.976678       1 server.go:846] "Version info" version="v1.28.0"
	I1129 09:16:31.976695       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:16:31.977363       1 config.go:188] "Starting service config controller"
	I1129 09:16:31.977391       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1129 09:16:31.977420       1 config.go:97] "Starting endpoint slice config controller"
	I1129 09:16:31.977424       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1129 09:16:31.977464       1 config.go:315] "Starting node config controller"
	I1129 09:16:31.977502       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1129 09:16:32.078322       1 shared_informer.go:318] Caches are synced for node config
	I1129 09:16:32.078332       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1129 09:16:32.078357       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [719922d67f4629eeb37ff02ef625a4a45934ecac6b66eb3b61978808b6a57fde] <==
	I1129 09:16:28.207098       1 serving.go:348] Generated self-signed cert in-memory
	W1129 09:16:30.598540       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1129 09:16:30.598581       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1129 09:16:30.598593       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1129 09:16:30.598601       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1129 09:16:30.635082       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1129 09:16:30.635116       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:16:30.638900       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:16:30.639171       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1129 09:16:30.640459       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1129 09:16:30.640562       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1129 09:16:30.739700       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
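The three authentication warnings above are self-resolving here: as the scheduler's own log states, it continues without an authentication configuration. For a workload that does need the lookup to succeed, the fix quoted in the first warning expands as below; the binding name and the kube-system:default service account are placeholders, not values from this run:

	kubectl create rolebinding extension-apiserver-authn-reader -n kube-system --role=extension-apiserver-authentication-reader --serviceaccount=kube-system:default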
	
	
	==> kubelet <==
	Nov 29 09:16:44 old-k8s-version-680646 kubelet[732]: I1129 09:16:44.082889     732 topology_manager.go:215] "Topology Admit Handler" podUID="2da13538-dddd-4c5d-81fd-6f823bb78493" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-8d8lv"
	Nov 29 09:16:44 old-k8s-version-680646 kubelet[732]: I1129 09:16:44.207295     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2da13538-dddd-4c5d-81fd-6f823bb78493-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-8d8lv\" (UID: \"2da13538-dddd-4c5d-81fd-6f823bb78493\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-8d8lv"
	Nov 29 09:16:44 old-k8s-version-680646 kubelet[732]: I1129 09:16:44.207349     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsmmx\" (UniqueName: \"kubernetes.io/projected/f5d4707e-ce09-4732-98b6-607cdc8bd1ff-kube-api-access-tsmmx\") pod \"kubernetes-dashboard-8694d4445c-mn66t\" (UID: \"f5d4707e-ce09-4732-98b6-607cdc8bd1ff\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mn66t"
	Nov 29 09:16:44 old-k8s-version-680646 kubelet[732]: I1129 09:16:44.207383     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84q69\" (UniqueName: \"kubernetes.io/projected/2da13538-dddd-4c5d-81fd-6f823bb78493-kube-api-access-84q69\") pod \"dashboard-metrics-scraper-5f989dc9cf-8d8lv\" (UID: \"2da13538-dddd-4c5d-81fd-6f823bb78493\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-8d8lv"
	Nov 29 09:16:44 old-k8s-version-680646 kubelet[732]: I1129 09:16:44.207414     732 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f5d4707e-ce09-4732-98b6-607cdc8bd1ff-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-mn66t\" (UID: \"f5d4707e-ce09-4732-98b6-607cdc8bd1ff\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mn66t"
	Nov 29 09:16:47 old-k8s-version-680646 kubelet[732]: I1129 09:16:47.319004     732 scope.go:117] "RemoveContainer" containerID="2aa287ae1de387f642913e851b14054905992058adde6eba7a11d78aea48d63a"
	Nov 29 09:16:48 old-k8s-version-680646 kubelet[732]: I1129 09:16:48.324176     732 scope.go:117] "RemoveContainer" containerID="2aa287ae1de387f642913e851b14054905992058adde6eba7a11d78aea48d63a"
	Nov 29 09:16:48 old-k8s-version-680646 kubelet[732]: I1129 09:16:48.324325     732 scope.go:117] "RemoveContainer" containerID="8ce55b594fee130d22c7869b97adc182b5a3fb462788336518e3e0129a8b1a65"
	Nov 29 09:16:48 old-k8s-version-680646 kubelet[732]: E1129 09:16:48.324743     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-8d8lv_kubernetes-dashboard(2da13538-dddd-4c5d-81fd-6f823bb78493)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-8d8lv" podUID="2da13538-dddd-4c5d-81fd-6f823bb78493"
	Nov 29 09:16:49 old-k8s-version-680646 kubelet[732]: I1129 09:16:49.329554     732 scope.go:117] "RemoveContainer" containerID="8ce55b594fee130d22c7869b97adc182b5a3fb462788336518e3e0129a8b1a65"
	Nov 29 09:16:49 old-k8s-version-680646 kubelet[732]: E1129 09:16:49.329979     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-8d8lv_kubernetes-dashboard(2da13538-dddd-4c5d-81fd-6f823bb78493)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-8d8lv" podUID="2da13538-dddd-4c5d-81fd-6f823bb78493"
	Nov 29 09:16:50 old-k8s-version-680646 kubelet[732]: I1129 09:16:50.350522     732 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mn66t" podStartSLOduration=0.806896352 podCreationTimestamp="2025-11-29 09:16:44 +0000 UTC" firstStartedPulling="2025-11-29 09:16:44.415668875 +0000 UTC m=+17.265995583" lastFinishedPulling="2025-11-29 09:16:49.959231741 +0000 UTC m=+22.809558454" observedRunningTime="2025-11-29 09:16:50.350172593 +0000 UTC m=+23.200499312" watchObservedRunningTime="2025-11-29 09:16:50.350459223 +0000 UTC m=+23.200785942"
	Nov 29 09:16:54 old-k8s-version-680646 kubelet[732]: I1129 09:16:54.385048     732 scope.go:117] "RemoveContainer" containerID="8ce55b594fee130d22c7869b97adc182b5a3fb462788336518e3e0129a8b1a65"
	Nov 29 09:16:54 old-k8s-version-680646 kubelet[732]: E1129 09:16:54.385318     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-8d8lv_kubernetes-dashboard(2da13538-dddd-4c5d-81fd-6f823bb78493)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-8d8lv" podUID="2da13538-dddd-4c5d-81fd-6f823bb78493"
	Nov 29 09:17:02 old-k8s-version-680646 kubelet[732]: I1129 09:17:02.365623     732 scope.go:117] "RemoveContainer" containerID="4eeb9cde84ff02b79f96004411581e8503a4fc89f1155b4646dd015b41a654c5"
	Nov 29 09:17:09 old-k8s-version-680646 kubelet[732]: I1129 09:17:09.247447     732 scope.go:117] "RemoveContainer" containerID="8ce55b594fee130d22c7869b97adc182b5a3fb462788336518e3e0129a8b1a65"
	Nov 29 09:17:09 old-k8s-version-680646 kubelet[732]: I1129 09:17:09.385385     732 scope.go:117] "RemoveContainer" containerID="8ce55b594fee130d22c7869b97adc182b5a3fb462788336518e3e0129a8b1a65"
	Nov 29 09:17:09 old-k8s-version-680646 kubelet[732]: I1129 09:17:09.386186     732 scope.go:117] "RemoveContainer" containerID="e4c8a4234fb3187bd7599e962d6a85a954434a95064cb491b1e31a4ed482fd06"
	Nov 29 09:17:09 old-k8s-version-680646 kubelet[732]: E1129 09:17:09.387714     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-8d8lv_kubernetes-dashboard(2da13538-dddd-4c5d-81fd-6f823bb78493)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-8d8lv" podUID="2da13538-dddd-4c5d-81fd-6f823bb78493"
	Nov 29 09:17:14 old-k8s-version-680646 kubelet[732]: I1129 09:17:14.384968     732 scope.go:117] "RemoveContainer" containerID="e4c8a4234fb3187bd7599e962d6a85a954434a95064cb491b1e31a4ed482fd06"
	Nov 29 09:17:14 old-k8s-version-680646 kubelet[732]: E1129 09:17:14.385357     732 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-8d8lv_kubernetes-dashboard(2da13538-dddd-4c5d-81fd-6f823bb78493)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-8d8lv" podUID="2da13538-dddd-4c5d-81fd-6f823bb78493"
	Nov 29 09:17:24 old-k8s-version-680646 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 29 09:17:24 old-k8s-version-680646 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 29 09:17:24 old-k8s-version-680646 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 29 09:17:24 old-k8s-version-680646 systemd[1]: kubelet.service: Consumed 1.664s CPU time.
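Two details worth noting in the kubelet stream: the CrashLoopBackOff delay for dashboard-metrics-scraper doubles from 10s to 20s between attempts (kubelet keeps doubling the back-off, capped at five minutes, until the container stays up long enough for the counter to reset), and the final four lines show systemd stopping kubelet at 09:17:24, which lines up with the pause command under test disabling kubelet.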
	
	
	==> kubernetes-dashboard [61ce0b8ff133dd7770871455d52b8eb5571079a0a2609fadc954e3bec70465cd] <==
	2025/11/29 09:16:50 Starting overwatch
	2025/11/29 09:16:50 Using namespace: kubernetes-dashboard
	2025/11/29 09:16:50 Using in-cluster config to connect to apiserver
	2025/11/29 09:16:50 Using secret token for csrf signing
	2025/11/29 09:16:50 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/29 09:16:50 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/29 09:16:50 Successful initial request to the apiserver, version: v1.28.0
	2025/11/29 09:16:50 Generating JWE encryption key
	2025/11/29 09:16:50 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/29 09:16:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/29 09:16:50 Initializing JWE encryption key from synchronized object
	2025/11/29 09:16:50 Creating in-cluster Sidecar client
	2025/11/29 09:16:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/29 09:16:50 Serving insecurely on HTTP port: 9090
	2025/11/29 09:17:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
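The two metric-client health-check failures are consistent with the kubelet section above: the Sidecar client polls the dashboard-metrics-scraper service, and that service's only backend is the pod stuck in CrashLoopBackOff, so the check fails and retries on its 30-second interval.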
	
	
	==> storage-provisioner [4eeb9cde84ff02b79f96004411581e8503a4fc89f1155b4646dd015b41a654c5] <==
	I1129 09:16:31.890330       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1129 09:17:01.894279       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [c5f2e76d762bbed4aac24938d20ed8ef6bc68a75c8faeace43ca72adfebaa06f] <==
	I1129 09:17:02.425744       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1129 09:17:02.435425       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1129 09:17:02.435572       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1129 09:17:19.840571       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 09:17:19.840928       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"20fa7a7e-0f59-4be3-9c60-c5917e942d20", APIVersion:"v1", ResourceVersion:"667", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-680646_93874ddc-56de-474e-8750-1fb398bcee7e became leader
	I1129 09:17:19.840971       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-680646_93874ddc-56de-474e-8750-1fb398bcee7e!
	I1129 09:17:19.942063       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-680646_93874ddc-56de-474e-8750-1fb398bcee7e!
	

                                                
                                                
-- /stdout --
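The storage-provisioner pair in the logs above tells the recovery story: the first instance (4eeb9cde…) died after its 32s-timeout request to the apiserver service IP hit an i/o timeout, and its replacement (c5f2e76d…) won the endpoints-based leader election at 09:17:19. Assuming kubectl access to the profile, the leader record named in the event can be inspected directly:

	kubectl --context old-k8s-version-680646 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml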
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-680646 -n old-k8s-version-680646
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-680646 -n old-k8s-version-680646: exit status 2 (340.272691ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-680646 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (5.82s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (7.7s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-897274 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-897274 --alsologtostderr -v=1: exit status 80 (1.782486851s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-897274 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 09:17:31.998873  343610 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:17:31.999179  343610 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:17:31.999189  343610 out.go:374] Setting ErrFile to fd 2...
	I1129 09:17:31.999193  343610 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:17:31.999401  343610 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 09:17:31.999642  343610 out.go:368] Setting JSON to false
	I1129 09:17:31.999658  343610 mustload.go:66] Loading cluster: no-preload-897274
	I1129 09:17:31.999999  343610 config.go:182] Loaded profile config "no-preload-897274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:17:32.000544  343610 cli_runner.go:164] Run: docker container inspect no-preload-897274 --format={{.State.Status}}
	I1129 09:17:32.019896  343610 host.go:66] Checking if "no-preload-897274" exists ...
	I1129 09:17:32.020190  343610 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:17:32.089108  343610 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-29 09:17:32.078324432 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:17:32.089801  343610 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-897274 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1129 09:17:32.099952  343610 out.go:179] * Pausing node no-preload-897274 ... 
	I1129 09:17:32.101937  343610 host.go:66] Checking if "no-preload-897274" exists ...
	I1129 09:17:32.102326  343610 ssh_runner.go:195] Run: systemctl --version
	I1129 09:17:32.102400  343610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-897274
	I1129 09:17:32.121932  343610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/no-preload-897274/id_rsa Username:docker}
	I1129 09:17:32.225545  343610 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:17:32.239941  343610 pause.go:52] kubelet running: true
	I1129 09:17:32.240018  343610 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1129 09:17:32.422117  343610 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1129 09:17:32.422241  343610 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1129 09:17:32.498432  343610 cri.go:89] found id: "59647e07ee8a091171cec7b590acbc5f09666c37222fbab34c9303475c8dd562"
	I1129 09:17:32.498450  343610 cri.go:89] found id: "0c8d6d8a59c849da593bdc3e9048fb92e32d3eab72f152bc61fc3709ed0731db"
	I1129 09:17:32.498454  343610 cri.go:89] found id: "1e940917edcba9ed0c9cecbd2f8b6f46be2ada309804967c806a5001be24dc45"
	I1129 09:17:32.498457  343610 cri.go:89] found id: "519adcff5cf34b2b28b8394ca213b1c1f9c0f4a8d2d08dd5d4945135c6ed4a10"
	I1129 09:17:32.498460  343610 cri.go:89] found id: "373fd7f555c013460e8c02caadd4d3bd9483657ac34e29d424536bbb510f2532"
	I1129 09:17:32.498463  343610 cri.go:89] found id: "65cad02ba2a7910bcdcffb28c773b53da4d5023aecfd588deeacf22d8dca4a38"
	I1129 09:17:32.498466  343610 cri.go:89] found id: "ad66b46c591ebaf67ffea99e3f782c8b3c848d695dab97ba85d7b414cf4c3170"
	I1129 09:17:32.498468  343610 cri.go:89] found id: "652695edd3b368ed64211f7ee974fad1ce2be0ae46ac90c153b50e751c36007b"
	I1129 09:17:32.498471  343610 cri.go:89] found id: "aef2b06f8a8ac95a822e5865d9062a9500764f567fb042a1dbeda8630e6e5914"
	I1129 09:17:32.498476  343610 cri.go:89] found id: "4bf1ca4b85763fb7cd9174bcda6d47ff02461f77f69bd536caf0e9695c5bd144"
	I1129 09:17:32.498479  343610 cri.go:89] found id: "9cb3694a5dbd1f8de0fd09777a72d591a3bc36f97de400cddbcf1adb6df108e7"
	I1129 09:17:32.498481  343610 cri.go:89] found id: ""
	I1129 09:17:32.498521  343610 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 09:17:32.511635  343610 retry.go:31] will retry after 338.167331ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:17:32Z" level=error msg="open /run/runc: no such file or directory"
	I1129 09:17:32.849955  343610 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:17:32.864068  343610 pause.go:52] kubelet running: false
	I1129 09:17:32.864120  343610 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1129 09:17:33.034872  343610 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1129 09:17:33.034992  343610 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1129 09:17:33.116029  343610 cri.go:89] found id: "59647e07ee8a091171cec7b590acbc5f09666c37222fbab34c9303475c8dd562"
	I1129 09:17:33.116052  343610 cri.go:89] found id: "0c8d6d8a59c849da593bdc3e9048fb92e32d3eab72f152bc61fc3709ed0731db"
	I1129 09:17:33.116058  343610 cri.go:89] found id: "1e940917edcba9ed0c9cecbd2f8b6f46be2ada309804967c806a5001be24dc45"
	I1129 09:17:33.116063  343610 cri.go:89] found id: "519adcff5cf34b2b28b8394ca213b1c1f9c0f4a8d2d08dd5d4945135c6ed4a10"
	I1129 09:17:33.116068  343610 cri.go:89] found id: "373fd7f555c013460e8c02caadd4d3bd9483657ac34e29d424536bbb510f2532"
	I1129 09:17:33.116073  343610 cri.go:89] found id: "65cad02ba2a7910bcdcffb28c773b53da4d5023aecfd588deeacf22d8dca4a38"
	I1129 09:17:33.116077  343610 cri.go:89] found id: "ad66b46c591ebaf67ffea99e3f782c8b3c848d695dab97ba85d7b414cf4c3170"
	I1129 09:17:33.116081  343610 cri.go:89] found id: "652695edd3b368ed64211f7ee974fad1ce2be0ae46ac90c153b50e751c36007b"
	I1129 09:17:33.116085  343610 cri.go:89] found id: "aef2b06f8a8ac95a822e5865d9062a9500764f567fb042a1dbeda8630e6e5914"
	I1129 09:17:33.116094  343610 cri.go:89] found id: "4bf1ca4b85763fb7cd9174bcda6d47ff02461f77f69bd536caf0e9695c5bd144"
	I1129 09:17:33.116099  343610 cri.go:89] found id: "9cb3694a5dbd1f8de0fd09777a72d591a3bc36f97de400cddbcf1adb6df108e7"
	I1129 09:17:33.116103  343610 cri.go:89] found id: ""
	I1129 09:17:33.116150  343610 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 09:17:33.130567  343610 retry.go:31] will retry after 281.009123ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:17:33Z" level=error msg="open /run/runc: no such file or directory"
	I1129 09:17:33.412091  343610 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:17:33.427922  343610 pause.go:52] kubelet running: false
	I1129 09:17:33.427984  343610 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1129 09:17:33.606216  343610 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1129 09:17:33.606306  343610 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1129 09:17:33.686174  343610 cri.go:89] found id: "59647e07ee8a091171cec7b590acbc5f09666c37222fbab34c9303475c8dd562"
	I1129 09:17:33.686207  343610 cri.go:89] found id: "0c8d6d8a59c849da593bdc3e9048fb92e32d3eab72f152bc61fc3709ed0731db"
	I1129 09:17:33.686213  343610 cri.go:89] found id: "1e940917edcba9ed0c9cecbd2f8b6f46be2ada309804967c806a5001be24dc45"
	I1129 09:17:33.686218  343610 cri.go:89] found id: "519adcff5cf34b2b28b8394ca213b1c1f9c0f4a8d2d08dd5d4945135c6ed4a10"
	I1129 09:17:33.686223  343610 cri.go:89] found id: "373fd7f555c013460e8c02caadd4d3bd9483657ac34e29d424536bbb510f2532"
	I1129 09:17:33.686227  343610 cri.go:89] found id: "65cad02ba2a7910bcdcffb28c773b53da4d5023aecfd588deeacf22d8dca4a38"
	I1129 09:17:33.686230  343610 cri.go:89] found id: "ad66b46c591ebaf67ffea99e3f782c8b3c848d695dab97ba85d7b414cf4c3170"
	I1129 09:17:33.686233  343610 cri.go:89] found id: "652695edd3b368ed64211f7ee974fad1ce2be0ae46ac90c153b50e751c36007b"
	I1129 09:17:33.686244  343610 cri.go:89] found id: "aef2b06f8a8ac95a822e5865d9062a9500764f567fb042a1dbeda8630e6e5914"
	I1129 09:17:33.686256  343610 cri.go:89] found id: "4bf1ca4b85763fb7cd9174bcda6d47ff02461f77f69bd536caf0e9695c5bd144"
	I1129 09:17:33.686266  343610 cri.go:89] found id: "9cb3694a5dbd1f8de0fd09777a72d591a3bc36f97de400cddbcf1adb6df108e7"
	I1129 09:17:33.686271  343610 cri.go:89] found id: ""
	I1129 09:17:33.686319  343610 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 09:17:33.705626  343610 out.go:203] 
	W1129 09:17:33.706826  343610 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:17:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:17:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 09:17:33.706882  343610 out.go:285] * 
	* 
	W1129 09:17:33.713178  343610 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 09:17:33.714556  343610 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-897274 --alsologtostderr -v=1 failed: exit status 80
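The stderr trace above lays out the pause path: check whether kubelet is active, run `systemctl disable --now kubelet`, enumerate candidate containers with crictl, then list running containers with `runc list -f json`. That last step is what fails (three times, with back-off retries) because `/run/runc`, runc's default state root, does not exist on this crio node; minikube surfaces the failure as GUEST_PAUSE / exit status 80. A minimal reproduction sketch, assuming the profile is still running (these are the same commands the trace executed over SSH):

	minikube -p no-preload-897274 ssh "sudo runc list -f json"
	# expected to fail: open /run/runc: no such file or directory
	minikube -p no-preload-897274 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# crio itself still reports the kube-system container IDs listed in the trace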
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-897274
helpers_test.go:243: (dbg) docker inspect no-preload-897274:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "49538363fc817b09e95a0d5dde7389da842341d069cb9ca46414b1fb3fc4d635",
	        "Created": "2025-11-29T09:15:12.796321744Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 331397,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T09:16:33.380723056Z",
	            "FinishedAt": "2025-11-29T09:16:32.399933912Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/49538363fc817b09e95a0d5dde7389da842341d069cb9ca46414b1fb3fc4d635/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/49538363fc817b09e95a0d5dde7389da842341d069cb9ca46414b1fb3fc4d635/hostname",
	        "HostsPath": "/var/lib/docker/containers/49538363fc817b09e95a0d5dde7389da842341d069cb9ca46414b1fb3fc4d635/hosts",
	        "LogPath": "/var/lib/docker/containers/49538363fc817b09e95a0d5dde7389da842341d069cb9ca46414b1fb3fc4d635/49538363fc817b09e95a0d5dde7389da842341d069cb9ca46414b1fb3fc4d635-json.log",
	        "Name": "/no-preload-897274",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-897274:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-897274",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "49538363fc817b09e95a0d5dde7389da842341d069cb9ca46414b1fb3fc4d635",
	                "LowerDir": "/var/lib/docker/overlay2/2391079c7361fb7ef885c6e2d9f7292f958728db50719b04d13acb986145d951-init/diff:/var/lib/docker/overlay2/5b012372cfb54f6c71f4d7f0bca0124866eeda530eaa04bd84a67c7b4c8b35a8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2391079c7361fb7ef885c6e2d9f7292f958728db50719b04d13acb986145d951/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2391079c7361fb7ef885c6e2d9f7292f958728db50719b04d13acb986145d951/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2391079c7361fb7ef885c6e2d9f7292f958728db50719b04d13acb986145d951/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-897274",
	                "Source": "/var/lib/docker/volumes/no-preload-897274/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-897274",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-897274",
	                "name.minikube.sigs.k8s.io": "no-preload-897274",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "8edab6239d101f4307c17bd89bb6aa376ec5676da6e2cfbe1d59ed607b50e848",
	            "SandboxKey": "/var/run/docker/netns/8edab6239d10",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-897274": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d8f02c8f2b116aa0973a6466bb52331af9f99e5ba95f8e3241688d808e61a07a",
	                    "EndpointID": "4cf033600e55deff2ee2cf4cfd9c44a3a0bea1047264e0d47410ecc00e8e32e7",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "2a:83:e1:1b:75:14",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-897274",
	                        "49538363fc81"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
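The SSH endpoint dialed in the pause trace (127.0.0.1:33114) comes straight out of the NetworkSettings.Ports block above; the Go template from the trace extracts it and can be replayed by hand:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-897274
	# 33114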
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-897274 -n no-preload-897274
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-897274 -n no-preload-897274: exit status 2 (385.308263ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-897274 logs -n 25
E1129 09:17:34.258170    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/auto-628644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-897274 logs -n 25: (1.284374778s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p bridge-628644                                                                                                                                                                                                                              │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ delete  │ -p disable-driver-mounts-327778                                                                                                                                                                                                               │ disable-driver-mounts-327778 │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ start   │ -p default-k8s-diff-port-632243 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:16 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-680646 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	│ stop    │ -p old-k8s-version-680646 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:16 UTC │
	│ addons  │ enable metrics-server -p no-preload-897274 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	│ stop    │ -p no-preload-897274 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:16 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-680646 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:16 UTC │
	│ start   │ -p old-k8s-version-680646 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:17 UTC │
	│ start   │ -p no-preload-897274 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-160987 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-632243 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	│ stop    │ -p embed-certs-160987 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:17 UTC │
	│ stop    │ -p default-k8s-diff-port-632243 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-160987 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ start   │ -p embed-certs-160987 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-632243 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ start   │ -p default-k8s-diff-port-632243 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │                     │
	│ image   │ old-k8s-version-680646 image list --format=json                                                                                                                                                                                               │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ pause   │ -p old-k8s-version-680646 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │                     │
	│ delete  │ -p old-k8s-version-680646                                                                                                                                                                                                                     │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ image   │ no-preload-897274 image list --format=json                                                                                                                                                                                                    │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ pause   │ -p no-preload-897274 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │                     │
	│ delete  │ -p old-k8s-version-680646                                                                                                                                                                                                                     │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ start   │ -p newest-cni-020433 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-020433            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:17:32
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 09:17:32.750525  343912 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:17:32.750831  343912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:17:32.750854  343912 out.go:374] Setting ErrFile to fd 2...
	I1129 09:17:32.750859  343912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:17:32.751040  343912 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 09:17:32.751569  343912 out.go:368] Setting JSON to false
	I1129 09:17:32.753086  343912 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3605,"bootTime":1764404248,"procs":412,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 09:17:32.753155  343912 start.go:143] virtualization: kvm guest
	I1129 09:17:32.755163  343912 out.go:179] * [newest-cni-020433] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 09:17:32.756656  343912 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:17:32.756692  343912 notify.go:221] Checking for updates...
	I1129 09:17:32.759425  343912 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:17:32.760722  343912 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 09:17:32.765362  343912 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5652/.minikube
	I1129 09:17:32.766699  343912 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 09:17:32.768011  343912 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:17:32.769812  343912 config.go:182] Loaded profile config "default-k8s-diff-port-632243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:17:32.769952  343912 config.go:182] Loaded profile config "embed-certs-160987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:17:32.770081  343912 config.go:182] Loaded profile config "no-preload-897274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:17:32.770208  343912 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:17:32.794655  343912 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1129 09:17:32.794775  343912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:17:32.856269  343912 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:74 SystemTime:2025-11-29 09:17:32.845151576 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:17:32.856388  343912 docker.go:319] overlay module found
	I1129 09:17:32.858258  343912 out.go:179] * Using the docker driver based on user configuration
	I1129 09:17:32.859415  343912 start.go:309] selected driver: docker
	I1129 09:17:32.859434  343912 start.go:927] validating driver "docker" against <nil>
	I1129 09:17:32.859451  343912 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:17:32.860352  343912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:17:32.930751  343912 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:74 SystemTime:2025-11-29 09:17:32.91839311 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:17:32.930951  343912 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1129 09:17:32.930985  343912 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1129 09:17:32.931224  343912 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1129 09:17:32.933425  343912 out.go:179] * Using Docker driver with root privileges
	I1129 09:17:32.934824  343912 cni.go:84] Creating CNI manager for ""
	I1129 09:17:32.934925  343912 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:17:32.934944  343912 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1129 09:17:32.935044  343912 start.go:353] cluster config:
	{Name:newest-cni-020433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-020433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:17:32.936354  343912 out.go:179] * Starting "newest-cni-020433" primary control-plane node in "newest-cni-020433" cluster
	I1129 09:17:32.937514  343912 cache.go:134] Beginning downloading kic base image for docker with crio
	I1129 09:17:32.938803  343912 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 09:17:32.940016  343912 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:17:32.940051  343912 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1129 09:17:32.940062  343912 cache.go:65] Caching tarball of preloaded images
	I1129 09:17:32.940107  343912 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 09:17:32.940163  343912 preload.go:238] Found /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1129 09:17:32.940176  343912 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1129 09:17:32.940278  343912 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/config.json ...
	I1129 09:17:32.940301  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/config.json: {Name:mk7d4da653b0e884b27837053cd3d354c3ff76e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:32.963727  343912 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 09:17:32.963754  343912 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 09:17:32.963777  343912 cache.go:243] Successfully downloaded all kic artifacts
	I1129 09:17:32.963830  343912 start.go:360] acquireMachinesLock for newest-cni-020433: {Name:mk6347901682a01c9d317c6a402722ce1e16792e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:17:32.963998  343912 start.go:364] duration metric: took 95.455µs to acquireMachinesLock for "newest-cni-020433"
	I1129 09:17:32.964029  343912 start.go:93] Provisioning new machine with config: &{Name:newest-cni-020433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-020433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 09:17:32.964128  343912 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Nov 29 09:16:53 no-preload-897274 crio[567]: time="2025-11-29T09:16:53.626509957Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 09:16:53 no-preload-897274 crio[567]: time="2025-11-29T09:16:53.847923759Z" level=info msg="Removing container: 669b31d641acb4f2d2c1bf42a4868c6e1bcfdf66c443ca1dafc4ef1fed587d25" id=c3d29e4f-3535-4569-bd87-3d620ca6f600 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 29 09:16:53 no-preload-897274 crio[567]: time="2025-11-29T09:16:53.85763454Z" level=info msg="Removed container 669b31d641acb4f2d2c1bf42a4868c6e1bcfdf66c443ca1dafc4ef1fed587d25: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5x998/dashboard-metrics-scraper" id=c3d29e4f-3535-4569-bd87-3d620ca6f600 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 29 09:17:13 no-preload-897274 crio[567]: time="2025-11-29T09:17:13.907609144Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f73d4cba-ce99-4ac7-b2cb-e471b3914270 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:17:13 no-preload-897274 crio[567]: time="2025-11-29T09:17:13.908615747Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=57b53b42-7fcc-45aa-9296-a1ac21524fc8 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:17:13 no-preload-897274 crio[567]: time="2025-11-29T09:17:13.909739957Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ace1841e-7106-4717-990f-f71e95e7aa5e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:17:13 no-preload-897274 crio[567]: time="2025-11-29T09:17:13.909906762Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:13 no-preload-897274 crio[567]: time="2025-11-29T09:17:13.914054964Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:13 no-preload-897274 crio[567]: time="2025-11-29T09:17:13.914274102Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/63f925b16e94d7112715160fea428a2c9f628d440e086b9b6d157e663731d8c6/merged/etc/passwd: no such file or directory"
	Nov 29 09:17:13 no-preload-897274 crio[567]: time="2025-11-29T09:17:13.91431116Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/63f925b16e94d7112715160fea428a2c9f628d440e086b9b6d157e663731d8c6/merged/etc/group: no such file or directory"
	Nov 29 09:17:13 no-preload-897274 crio[567]: time="2025-11-29T09:17:13.914636795Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:13 no-preload-897274 crio[567]: time="2025-11-29T09:17:13.939509353Z" level=info msg="Created container 59647e07ee8a091171cec7b590acbc5f09666c37222fbab34c9303475c8dd562: kube-system/storage-provisioner/storage-provisioner" id=ace1841e-7106-4717-990f-f71e95e7aa5e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:17:13 no-preload-897274 crio[567]: time="2025-11-29T09:17:13.940189743Z" level=info msg="Starting container: 59647e07ee8a091171cec7b590acbc5f09666c37222fbab34c9303475c8dd562" id=923c222c-6bf8-4bdd-9c48-71ddd1bf7493 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 09:17:13 no-preload-897274 crio[567]: time="2025-11-29T09:17:13.942362254Z" level=info msg="Started container" PID=1767 containerID=59647e07ee8a091171cec7b590acbc5f09666c37222fbab34c9303475c8dd562 description=kube-system/storage-provisioner/storage-provisioner id=923c222c-6bf8-4bdd-9c48-71ddd1bf7493 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5475289b6f8558922e141fe7087f060c73c323100165bb3649cd389f4e6220a4
	Nov 29 09:17:15 no-preload-897274 crio[567]: time="2025-11-29T09:17:15.769818839Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a03fc0f4-adce-4634-af16-f9c6b103d2e6 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:17:15 no-preload-897274 crio[567]: time="2025-11-29T09:17:15.77273848Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8491fd38-8114-4263-8cb1-5fb5f31147b6 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:17:15 no-preload-897274 crio[567]: time="2025-11-29T09:17:15.773988972Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5x998/dashboard-metrics-scraper" id=1f906f68-1f9c-499c-8dd6-d2c2076dba7c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:17:15 no-preload-897274 crio[567]: time="2025-11-29T09:17:15.774138977Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:15 no-preload-897274 crio[567]: time="2025-11-29T09:17:15.779578326Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:15 no-preload-897274 crio[567]: time="2025-11-29T09:17:15.780263523Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:15 no-preload-897274 crio[567]: time="2025-11-29T09:17:15.825507561Z" level=info msg="Created container 4bf1ca4b85763fb7cd9174bcda6d47ff02461f77f69bd536caf0e9695c5bd144: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5x998/dashboard-metrics-scraper" id=1f906f68-1f9c-499c-8dd6-d2c2076dba7c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:17:15 no-preload-897274 crio[567]: time="2025-11-29T09:17:15.826307782Z" level=info msg="Starting container: 4bf1ca4b85763fb7cd9174bcda6d47ff02461f77f69bd536caf0e9695c5bd144" id=b1c892ac-70a0-4da6-8471-e3fb2c9a02ed name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 09:17:15 no-preload-897274 crio[567]: time="2025-11-29T09:17:15.828700091Z" level=info msg="Started container" PID=1783 containerID=4bf1ca4b85763fb7cd9174bcda6d47ff02461f77f69bd536caf0e9695c5bd144 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5x998/dashboard-metrics-scraper id=b1c892ac-70a0-4da6-8471-e3fb2c9a02ed name=/runtime.v1.RuntimeService/StartContainer sandboxID=201141d085a345342d342b3f232dbe1985c0f53cca68b3bac909d1be3bb7a6cc
	Nov 29 09:17:15 no-preload-897274 crio[567]: time="2025-11-29T09:17:15.91844564Z" level=info msg="Removing container: 0ddd695d6cbfcceee3a6c665c9274415590bfd179949ffd9e9b055d76bd9acc4" id=149c8517-9a31-4e5c-af77-069eba7647d6 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 29 09:17:15 no-preload-897274 crio[567]: time="2025-11-29T09:17:15.935145803Z" level=info msg="Removed container 0ddd695d6cbfcceee3a6c665c9274415590bfd179949ffd9e9b055d76bd9acc4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5x998/dashboard-metrics-scraper" id=149c8517-9a31-4e5c-af77-069eba7647d6 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	4bf1ca4b85763       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago      Exited              dashboard-metrics-scraper   2                   201141d085a34       dashboard-metrics-scraper-6ffb444bf9-5x998   kubernetes-dashboard
	59647e07ee8a0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   5475289b6f855       storage-provisioner                          kube-system
	9cb3694a5dbd1       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   44 seconds ago      Running             kubernetes-dashboard        0                   0c74399ea54ea       kubernetes-dashboard-855c9754f9-6fjrq        kubernetes-dashboard
	0c8d6d8a59c84       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           51 seconds ago      Running             coredns                     0                   08f8a4759f81b       coredns-66bc5c9577-85hh2                     kube-system
	1e940917edcba       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   5475289b6f855       storage-provisioner                          kube-system
	18110d07bbdb3       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   16f91e78e4b5c       busybox                                      default
	519adcff5cf34       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   2e30708fe0643       kindnet-jbmcv                                kube-system
	373fd7f555c01       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           51 seconds ago      Running             kube-proxy                  0                   d48d6e77ace84       kube-proxy-h9zhz                             kube-system
	65cad02ba2a79       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           54 seconds ago      Running             kube-controller-manager     0                   6ffa286fa8876       kube-controller-manager-no-preload-897274    kube-system
	ad66b46c591eb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           54 seconds ago      Running             etcd                        0                   ce65afa8dd94e       etcd-no-preload-897274                       kube-system
	652695edd3b36       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           54 seconds ago      Running             kube-apiserver              0                   3cb56f6eeac95       kube-apiserver-no-preload-897274             kube-system
	aef2b06f8a8ac       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           54 seconds ago      Running             kube-scheduler              0                   c7bae4bbe4b06       kube-scheduler-no-preload-897274             kube-system
	
	
	==> coredns [0c8d6d8a59c849da593bdc3e9048fb92e32d3eab72f152bc61fc3709ed0731db] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55476 - 11879 "HINFO IN 4741727914574649139.3026824069202930127. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.01756231s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-897274
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-897274
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=no-preload-897274
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T09_15_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 09:15:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-897274
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 09:17:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 09:17:13 +0000   Sat, 29 Nov 2025 09:15:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 09:17:13 +0000   Sat, 29 Nov 2025 09:15:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 09:17:13 +0000   Sat, 29 Nov 2025 09:15:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 09:17:13 +0000   Sat, 29 Nov 2025 09:16:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-897274
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                fc2d6958-d45c-48d6-8525-65c7170610ae
	  Boot ID:                    5b66e152-4b71-4129-a911-cefbf4861c86
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-66bc5c9577-85hh2                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	  kube-system                 etcd-no-preload-897274                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         109s
	  kube-system                 kindnet-jbmcv                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-no-preload-897274              250m (3%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-no-preload-897274     200m (2%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-h9zhz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-no-preload-897274              100m (1%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-5x998    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-6fjrq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 102s               kube-proxy       
	  Normal  Starting                 51s                kube-proxy       
	  Normal  Starting                 110s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  109s               kubelet          Node no-preload-897274 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    109s               kubelet          Node no-preload-897274 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     109s               kubelet          Node no-preload-897274 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           105s               node-controller  Node no-preload-897274 event: Registered Node no-preload-897274 in Controller
	  Normal  NodeReady                91s                kubelet          Node no-preload-897274 status is now: NodeReady
	  Normal  Starting                 55s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)  kubelet          Node no-preload-897274 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)  kubelet          Node no-preload-897274 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)  kubelet          Node no-preload-897274 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           49s                node-controller  Node no-preload-897274 event: Registered Node no-preload-897274 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 e1 87 e4 84 ae 08 06
	[  +0.002640] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8a c7 6c 3e 12 d1 08 06
	[ +14.778105] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 56 8a 29 c7 2a 6f 08 06
	[ +13.520989] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 86 32 aa eb 52 77 08 06
	[  +0.000371] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 56 8a 29 c7 2a 6f 08 06
	[ +14.762906] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 df ae 93 da f6 08 06
	[  +0.000384] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a c7 6c 3e 12 d1 08 06
	[Nov29 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 2e a3 ac f9 5c c0 08 06
	[ +13.297626] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 33 ff 54 aa a1 08 06
	[  +7.390906] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0a 68 f3 3d 3c e9 08 06
	[  +0.000436] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 2e a3 ac f9 5c c0 08 06
	[  +7.145922] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ea c3 2f 2a a3 01 08 06
	[  +0.000415] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e2 33 ff 54 aa a1 08 06
	
	
	==> etcd [ad66b46c591ebaf67ffea99e3f782c8b3c848d695dab97ba85d7b414cf4c3170] <==
	{"level":"warn","ts":"2025-11-29T09:16:41.586608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.594679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.610231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.617216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.624292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.631561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.639367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.646464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.652633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.659604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.672089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.678898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.685621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.693011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.706609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.714393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.721512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.730519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.738156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.746527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.769122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.773045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.779805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.787161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.848341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41210","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:17:35 up  1:00,  0 user,  load average: 3.49, 3.80, 2.49
	Linux no-preload-897274 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [519adcff5cf34b2b28b8394ca213b1c1f9c0f4a8d2d08dd5d4945135c6ed4a10] <==
	I1129 09:16:43.398119       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 09:16:43.398448       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1129 09:16:43.398659       1 main.go:148] setting mtu 1500 for CNI 
	I1129 09:16:43.398680       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 09:16:43.398705       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T09:16:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 09:16:43.603048       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 09:16:43.603125       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 09:16:43.603137       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 09:16:43.603404       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1129 09:16:43.994060       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 09:16:43.994188       1 metrics.go:72] Registering metrics
	I1129 09:16:43.994342       1 controller.go:711] "Syncing nftables rules"
	I1129 09:16:53.603086       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1129 09:16:53.603169       1 main.go:301] handling current node
	I1129 09:17:03.606542       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1129 09:17:03.606579       1 main.go:301] handling current node
	I1129 09:17:13.603593       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1129 09:17:13.603629       1 main.go:301] handling current node
	I1129 09:17:23.602989       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1129 09:17:23.603029       1 main.go:301] handling current node
	I1129 09:17:33.611958       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1129 09:17:33.611996       1 main.go:301] handling current node
	
	
	==> kube-apiserver [652695edd3b368ed64211f7ee974fad1ce2be0ae46ac90c153b50e751c36007b] <==
	I1129 09:16:42.357939       1 cache.go:39] Caches are synced for autoregister controller
	I1129 09:16:42.358666       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1129 09:16:42.360110       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1129 09:16:42.360308       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1129 09:16:42.360332       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1129 09:16:42.360249       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1129 09:16:42.361102       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1129 09:16:42.361037       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	E1129 09:16:42.368047       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1129 09:16:42.370195       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1129 09:16:42.380615       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1129 09:16:42.391048       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 09:16:42.697251       1 controller.go:667] quota admission added evaluator for: namespaces
	I1129 09:16:42.759482       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 09:16:42.814615       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 09:16:42.830813       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 09:16:42.841008       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 09:16:42.911936       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.18.197"}
	I1129 09:16:42.943592       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.254.239"}
	I1129 09:16:43.262199       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 09:16:45.678790       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 09:16:45.678864       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 09:16:46.029210       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 09:16:46.228905       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 09:16:46.228905       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [65cad02ba2a7910bcdcffb28c773b53da4d5023aecfd588deeacf22d8dca4a38] <==
	I1129 09:16:45.698398       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1129 09:16:45.701511       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1129 09:16:45.704979       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1129 09:16:45.706490       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:16:45.707554       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1129 09:16:45.710170       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1129 09:16:45.713981       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1129 09:16:45.717283       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1129 09:16:45.720726       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1129 09:16:45.720742       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1129 09:16:45.721905       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1129 09:16:45.721994       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1129 09:16:45.724488       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1129 09:16:45.724874       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1129 09:16:45.724898       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1129 09:16:45.725273       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1129 09:16:45.725425       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1129 09:16:45.726749       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1129 09:16:45.726871       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1129 09:16:45.726876       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1129 09:16:45.726972       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1129 09:16:45.726960       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-897274"
	I1129 09:16:45.727041       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1129 09:16:45.730859       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1129 09:16:45.735125       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [373fd7f555c013460e8c02caadd4d3bd9483657ac34e29d424536bbb510f2532] <==
	I1129 09:16:43.195861       1 server_linux.go:53] "Using iptables proxy"
	I1129 09:16:43.268719       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 09:16:43.368975       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 09:16:43.369097       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1129 09:16:43.369271       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 09:16:43.392599       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 09:16:43.392677       1 server_linux.go:132] "Using iptables Proxier"
	I1129 09:16:43.399482       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 09:16:43.399917       1 server.go:527] "Version info" version="v1.34.1"
	I1129 09:16:43.399941       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:16:43.401313       1 config.go:200] "Starting service config controller"
	I1129 09:16:43.401448       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 09:16:43.401455       1 config.go:309] "Starting node config controller"
	I1129 09:16:43.401468       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 09:16:43.401674       1 config.go:106] "Starting endpoint slice config controller"
	I1129 09:16:43.402394       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 09:16:43.401692       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 09:16:43.402421       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 09:16:43.501833       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 09:16:43.501871       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 09:16:43.503009       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 09:16:43.503014       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
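
The proxier.go:242 line above refers to the sysctl net/ipv4/conf/all/route_localnet, which kube-proxy enables so NodePort services also answer on 127.0.0.1. A small sketch, assuming it runs on the node itself (for this profile, inside the minikube container), that reads the sysctl back:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// kube-proxy writes this sysctl at startup (see the proxier.go:242 message above).
	b, err := os.ReadFile("/proc/sys/net/ipv4/conf/all/route_localnet")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Println("route_localnet =", strings.TrimSpace(string(b))) // expect "1" once kube-proxy is up
}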
	
	
	==> kube-scheduler [aef2b06f8a8ac95a822e5865d9062a9500764f567fb042a1dbeda8630e6e5914] <==
	I1129 09:16:40.833131       1 serving.go:386] Generated self-signed cert in-memory
	I1129 09:16:42.331291       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1129 09:16:42.331325       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:16:42.339688       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1129 09:16:42.339719       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1129 09:16:42.339719       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:16:42.339743       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:16:42.339745       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1129 09:16:42.339730       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1129 09:16:42.340144       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1129 09:16:42.340596       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1129 09:16:42.439932       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1129 09:16:42.440121       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:16:42.440459       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Nov 29 09:16:46 no-preload-897274 kubelet[723]: I1129 09:16:46.427422     723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/66ea79bb-5692-472d-947c-7f67b687560c-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-6fjrq\" (UID: \"66ea79bb-5692-472d-947c-7f67b687560c\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6fjrq"
	Nov 29 09:16:46 no-preload-897274 kubelet[723]: I1129 09:16:46.427478     723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v45c8\" (UniqueName: \"kubernetes.io/projected/66ea79bb-5692-472d-947c-7f67b687560c-kube-api-access-v45c8\") pod \"kubernetes-dashboard-855c9754f9-6fjrq\" (UID: \"66ea79bb-5692-472d-947c-7f67b687560c\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6fjrq"
	Nov 29 09:16:46 no-preload-897274 kubelet[723]: I1129 09:16:46.427512     723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/646dce54-6e5c-4117-ab60-2d56f76f16a1-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-5x998\" (UID: \"646dce54-6e5c-4117-ab60-2d56f76f16a1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5x998"
	Nov 29 09:16:46 no-preload-897274 kubelet[723]: I1129 09:16:46.427605     723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf4nh\" (UniqueName: \"kubernetes.io/projected/646dce54-6e5c-4117-ab60-2d56f76f16a1-kube-api-access-gf4nh\") pod \"dashboard-metrics-scraper-6ffb444bf9-5x998\" (UID: \"646dce54-6e5c-4117-ab60-2d56f76f16a1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5x998"
	Nov 29 09:16:48 no-preload-897274 kubelet[723]: I1129 09:16:48.466529     723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 29 09:16:51 no-preload-897274 kubelet[723]: I1129 09:16:51.100551     723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6fjrq" podStartSLOduration=1.724284953 podStartE2EDuration="5.100523592s" podCreationTimestamp="2025-11-29 09:16:46 +0000 UTC" firstStartedPulling="2025-11-29 09:16:46.63075521 +0000 UTC m=+6.956028773" lastFinishedPulling="2025-11-29 09:16:50.006993851 +0000 UTC m=+10.332267412" observedRunningTime="2025-11-29 09:16:50.904256433 +0000 UTC m=+11.229530005" watchObservedRunningTime="2025-11-29 09:16:51.100523592 +0000 UTC m=+11.425797176"
	Nov 29 09:16:52 no-preload-897274 kubelet[723]: I1129 09:16:52.842732     723 scope.go:117] "RemoveContainer" containerID="669b31d641acb4f2d2c1bf42a4868c6e1bcfdf66c443ca1dafc4ef1fed587d25"
	Nov 29 09:16:53 no-preload-897274 kubelet[723]: I1129 09:16:53.846550     723 scope.go:117] "RemoveContainer" containerID="669b31d641acb4f2d2c1bf42a4868c6e1bcfdf66c443ca1dafc4ef1fed587d25"
	Nov 29 09:16:53 no-preload-897274 kubelet[723]: I1129 09:16:53.846709     723 scope.go:117] "RemoveContainer" containerID="0ddd695d6cbfcceee3a6c665c9274415590bfd179949ffd9e9b055d76bd9acc4"
	Nov 29 09:16:53 no-preload-897274 kubelet[723]: E1129 09:16:53.846913     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5x998_kubernetes-dashboard(646dce54-6e5c-4117-ab60-2d56f76f16a1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5x998" podUID="646dce54-6e5c-4117-ab60-2d56f76f16a1"
	Nov 29 09:16:54 no-preload-897274 kubelet[723]: I1129 09:16:54.850893     723 scope.go:117] "RemoveContainer" containerID="0ddd695d6cbfcceee3a6c665c9274415590bfd179949ffd9e9b055d76bd9acc4"
	Nov 29 09:16:54 no-preload-897274 kubelet[723]: E1129 09:16:54.851060     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5x998_kubernetes-dashboard(646dce54-6e5c-4117-ab60-2d56f76f16a1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5x998" podUID="646dce54-6e5c-4117-ab60-2d56f76f16a1"
	Nov 29 09:17:02 no-preload-897274 kubelet[723]: I1129 09:17:02.629129     723 scope.go:117] "RemoveContainer" containerID="0ddd695d6cbfcceee3a6c665c9274415590bfd179949ffd9e9b055d76bd9acc4"
	Nov 29 09:17:02 no-preload-897274 kubelet[723]: E1129 09:17:02.629386     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5x998_kubernetes-dashboard(646dce54-6e5c-4117-ab60-2d56f76f16a1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5x998" podUID="646dce54-6e5c-4117-ab60-2d56f76f16a1"
	Nov 29 09:17:13 no-preload-897274 kubelet[723]: I1129 09:17:13.907219     723 scope.go:117] "RemoveContainer" containerID="1e940917edcba9ed0c9cecbd2f8b6f46be2ada309804967c806a5001be24dc45"
	Nov 29 09:17:15 no-preload-897274 kubelet[723]: I1129 09:17:15.769095     723 scope.go:117] "RemoveContainer" containerID="0ddd695d6cbfcceee3a6c665c9274415590bfd179949ffd9e9b055d76bd9acc4"
	Nov 29 09:17:15 no-preload-897274 kubelet[723]: I1129 09:17:15.917043     723 scope.go:117] "RemoveContainer" containerID="0ddd695d6cbfcceee3a6c665c9274415590bfd179949ffd9e9b055d76bd9acc4"
	Nov 29 09:17:15 no-preload-897274 kubelet[723]: I1129 09:17:15.917265     723 scope.go:117] "RemoveContainer" containerID="4bf1ca4b85763fb7cd9174bcda6d47ff02461f77f69bd536caf0e9695c5bd144"
	Nov 29 09:17:15 no-preload-897274 kubelet[723]: E1129 09:17:15.917490     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5x998_kubernetes-dashboard(646dce54-6e5c-4117-ab60-2d56f76f16a1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5x998" podUID="646dce54-6e5c-4117-ab60-2d56f76f16a1"
	Nov 29 09:17:22 no-preload-897274 kubelet[723]: I1129 09:17:22.629485     723 scope.go:117] "RemoveContainer" containerID="4bf1ca4b85763fb7cd9174bcda6d47ff02461f77f69bd536caf0e9695c5bd144"
	Nov 29 09:17:22 no-preload-897274 kubelet[723]: E1129 09:17:22.629661     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5x998_kubernetes-dashboard(646dce54-6e5c-4117-ab60-2d56f76f16a1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5x998" podUID="646dce54-6e5c-4117-ab60-2d56f76f16a1"
	Nov 29 09:17:32 no-preload-897274 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 29 09:17:32 no-preload-897274 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 29 09:17:32 no-preload-897274 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 29 09:17:32 no-preload-897274 systemd[1]: kubelet.service: Consumed 1.759s CPU time.
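
The alternating RemoveContainer / CrashLoopBackOff pairs above follow the kubelet's default restart backoff, which doubles from 10s per failed restart up to a 5m cap; that matches the "back-off 10s" then "back-off 20s" messages for dashboard-metrics-scraper. An illustrative sketch of that schedule (default values only, not read from the node):

package main

import (
	"fmt"
	"time"
)

func main() {
	initialDelay := 10 * time.Second // kubelet default crash-loop base delay
	maxDelay := 5 * time.Minute      // default cap
	delay := initialDelay
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("restart %d: back-off %s\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}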
	
	
	==> kubernetes-dashboard [9cb3694a5dbd1f8de0fd09777a72d591a3bc36f97de400cddbcf1adb6df108e7] <==
	2025/11/29 09:16:50 Starting overwatch
	2025/11/29 09:16:50 Using namespace: kubernetes-dashboard
	2025/11/29 09:16:50 Using in-cluster config to connect to apiserver
	2025/11/29 09:16:50 Using secret token for csrf signing
	2025/11/29 09:16:50 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/29 09:16:50 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/29 09:16:50 Successful initial request to the apiserver, version: v1.34.1
	2025/11/29 09:16:50 Generating JWE encryption key
	2025/11/29 09:16:50 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/29 09:16:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/29 09:16:50 Initializing JWE encryption key from synchronized object
	2025/11/29 09:16:50 Creating in-cluster Sidecar client
	2025/11/29 09:16:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/29 09:16:50 Serving insecurely on HTTP port: 9090
	2025/11/29 09:17:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
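
The dashboard itself is serving; only its Sidecar metric client fails, because dashboard-metrics-scraper is the pod crash-looping in the kubelet log above, so the 30-second retries are expected. A quick reachability probe, assuming it is run inside the pod's network namespace (the URL is an assumption; the Service in front of the pod may remap the port):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	// Port 9090 per "Serving insecurely on HTTP port: 9090" above.
	resp, err := client.Get("http://localhost:9090/")
	if err != nil {
		fmt.Println("dashboard unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("dashboard HTTP status:", resp.Status)
}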
	
	
	==> storage-provisioner [1e940917edcba9ed0c9cecbd2f8b6f46be2ada309804967c806a5001be24dc45] <==
	I1129 09:16:43.166091       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1129 09:17:13.168032       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
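
This first provisioner instance died because it could not reach the apiserver through the kubernetes Service VIP within client-go's 32s version probe; the replacement instance below comes up cleanly. A stripped-down reproduction of the failing connectivity check (10.96.0.1:443 is this cluster's Service VIP; run it from a pod or node network where the VIP is routable):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same target as the failed "Get https://10.96.0.1:443/version" above,
	// reduced to a raw TCP dial with a short timeout.
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
	if err != nil {
		fmt.Println("dial failed:", err) // an i/o timeout here mirrors the log
		return
	}
	conn.Close()
	fmt.Println("apiserver Service reachable")
}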
	
	
	==> storage-provisioner [59647e07ee8a091171cec7b590acbc5f09666c37222fbab34c9303475c8dd562] <==
	I1129 09:17:13.956368       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1129 09:17:13.964085       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1129 09:17:13.964137       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1129 09:17:13.966510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:17:17.422060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:17:21.682405       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:17:25.280549       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:17:28.334095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:17:31.356151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:17:31.415619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:17:31.415777       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 09:17:31.415930       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"effd3485-4df8-4871-84ed-37c153135089", APIVersion:"v1", ResourceVersion:"634", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-897274_1199971e-b124-4727-bcfa-d8547fb697b4 became leader
	I1129 09:17:31.416005       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-897274_1199971e-b124-4727-bcfa-d8547fb697b4!
	W1129 09:17:31.418317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:17:31.434420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:17:31.516704       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-897274_1199971e-b124-4727-bcfa-d8547fb697b4!
	W1129 09:17:33.438502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:17:33.443811       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
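
The recurring warnings come from the provisioner's leader election, which still takes its lock on a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath), an API deprecated since v1.33 in favor of Leases. A hedged sketch of the Lease-based equivalent in client-go; the lock name and namespace mirror the log, while the identity string and kubeconfig path are assumptions:

package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
	"k8s.io/klog/v2"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		klog.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Lease-backed lock instead of the deprecated Endpoints-backed one.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     cs.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "example-identity"}, // assumed identity
	}
	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { klog.Info("acquired lease; start provisioning") },
			OnStoppedLeading: func() { klog.Info("lost lease") },
		},
	})
}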
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-897274 -n no-preload-897274
E1129 09:17:35.539554    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/auto-628644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-897274 -n no-preload-897274: exit status 2 (387.409087ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-897274 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-897274
helpers_test.go:243: (dbg) docker inspect no-preload-897274:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "49538363fc817b09e95a0d5dde7389da842341d069cb9ca46414b1fb3fc4d635",
	        "Created": "2025-11-29T09:15:12.796321744Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 331397,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T09:16:33.380723056Z",
	            "FinishedAt": "2025-11-29T09:16:32.399933912Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/49538363fc817b09e95a0d5dde7389da842341d069cb9ca46414b1fb3fc4d635/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/49538363fc817b09e95a0d5dde7389da842341d069cb9ca46414b1fb3fc4d635/hostname",
	        "HostsPath": "/var/lib/docker/containers/49538363fc817b09e95a0d5dde7389da842341d069cb9ca46414b1fb3fc4d635/hosts",
	        "LogPath": "/var/lib/docker/containers/49538363fc817b09e95a0d5dde7389da842341d069cb9ca46414b1fb3fc4d635/49538363fc817b09e95a0d5dde7389da842341d069cb9ca46414b1fb3fc4d635-json.log",
	        "Name": "/no-preload-897274",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-897274:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-897274",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "49538363fc817b09e95a0d5dde7389da842341d069cb9ca46414b1fb3fc4d635",
	                "LowerDir": "/var/lib/docker/overlay2/2391079c7361fb7ef885c6e2d9f7292f958728db50719b04d13acb986145d951-init/diff:/var/lib/docker/overlay2/5b012372cfb54f6c71f4d7f0bca0124866eeda530eaa04bd84a67c7b4c8b35a8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2391079c7361fb7ef885c6e2d9f7292f958728db50719b04d13acb986145d951/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2391079c7361fb7ef885c6e2d9f7292f958728db50719b04d13acb986145d951/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2391079c7361fb7ef885c6e2d9f7292f958728db50719b04d13acb986145d951/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-897274",
	                "Source": "/var/lib/docker/volumes/no-preload-897274/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-897274",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-897274",
	                "name.minikube.sigs.k8s.io": "no-preload-897274",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "8edab6239d101f4307c17bd89bb6aa376ec5676da6e2cfbe1d59ed607b50e848",
	            "SandboxKey": "/var/run/docker/netns/8edab6239d10",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-897274": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d8f02c8f2b116aa0973a6466bb52331af9f99e5ba95f8e3241688d808e61a07a",
	                    "EndpointID": "4cf033600e55deff2ee2cf4cfd9c44a3a0bea1047264e0d47410ecc00e8e32e7",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "2a:83:e1:1b:75:14",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-897274",
	                        "49538363fc81"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
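The pause failure leaves the container itself "Running" while the cluster status is ambiguous, so the mapped host ports in the inspect output above are the quickest direct route to the control plane. A small sketch that extracts the 8443/tcp host binding from the same docker inspect JSON (assumes the docker CLI is on PATH; the field names match the dump above):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Only the fields we need from `docker inspect` output.
type container struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "no-preload-897274").Output()
	if err != nil {
		panic(err)
	}
	var containers []container
	if err := json.Unmarshal(out, &containers); err != nil {
		panic(err)
	}
	for _, b := range containers[0].NetworkSettings.Ports["8443/tcp"] {
		fmt.Printf("apiserver published at %s:%s\n", b.HostIp, b.HostPort) // 127.0.0.1:33117 in this run
	}
}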
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-897274 -n no-preload-897274
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-897274 -n no-preload-897274: exit status 2 (376.617402ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-897274 logs -n 25
E1129 09:17:38.101269    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/auto-628644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-897274 logs -n 25: (2.871449599s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p bridge-628644                                                                                                                                                                                                                              │ bridge-628644                │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ delete  │ -p disable-driver-mounts-327778                                                                                                                                                                                                               │ disable-driver-mounts-327778 │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:15 UTC │
	│ start   │ -p default-k8s-diff-port-632243 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:15 UTC │ 29 Nov 25 09:16 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-680646 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	│ stop    │ -p old-k8s-version-680646 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:16 UTC │
	│ addons  │ enable metrics-server -p no-preload-897274 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	│ stop    │ -p no-preload-897274 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:16 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-680646 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:16 UTC │
	│ start   │ -p old-k8s-version-680646 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:17 UTC │
	│ start   │ -p no-preload-897274 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-160987 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-632243 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	│ stop    │ -p embed-certs-160987 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:17 UTC │
	│ stop    │ -p default-k8s-diff-port-632243 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-160987 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ start   │ -p embed-certs-160987 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-632243 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ start   │ -p default-k8s-diff-port-632243 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │                     │
	│ image   │ old-k8s-version-680646 image list --format=json                                                                                                                                                                                               │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ pause   │ -p old-k8s-version-680646 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │                     │
	│ delete  │ -p old-k8s-version-680646                                                                                                                                                                                                                     │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ image   │ no-preload-897274 image list --format=json                                                                                                                                                                                                    │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ pause   │ -p no-preload-897274 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │                     │
	│ delete  │ -p old-k8s-version-680646                                                                                                                                                                                                                     │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ start   │ -p newest-cni-020433 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-020433            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:17:32
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 09:17:32.750525  343912 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:17:32.750831  343912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:17:32.750854  343912 out.go:374] Setting ErrFile to fd 2...
	I1129 09:17:32.750859  343912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:17:32.751040  343912 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 09:17:32.751569  343912 out.go:368] Setting JSON to false
	I1129 09:17:32.753086  343912 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3605,"bootTime":1764404248,"procs":412,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 09:17:32.753155  343912 start.go:143] virtualization: kvm guest
	I1129 09:17:32.755163  343912 out.go:179] * [newest-cni-020433] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 09:17:32.756656  343912 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:17:32.756692  343912 notify.go:221] Checking for updates...
	I1129 09:17:32.759425  343912 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:17:32.760722  343912 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 09:17:32.765362  343912 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5652/.minikube
	I1129 09:17:32.766699  343912 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 09:17:32.768011  343912 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:17:32.769812  343912 config.go:182] Loaded profile config "default-k8s-diff-port-632243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:17:32.769952  343912 config.go:182] Loaded profile config "embed-certs-160987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:17:32.770081  343912 config.go:182] Loaded profile config "no-preload-897274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:17:32.770208  343912 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:17:32.794655  343912 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1129 09:17:32.794775  343912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:17:32.856269  343912 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:74 SystemTime:2025-11-29 09:17:32.845151576 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:17:32.856388  343912 docker.go:319] overlay module found
	I1129 09:17:32.858258  343912 out.go:179] * Using the docker driver based on user configuration
	I1129 09:17:32.859415  343912 start.go:309] selected driver: docker
	I1129 09:17:32.859434  343912 start.go:927] validating driver "docker" against <nil>
	I1129 09:17:32.859451  343912 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:17:32.860352  343912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:17:32.930751  343912 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:74 SystemTime:2025-11-29 09:17:32.91839311 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:17:32.930951  343912 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1129 09:17:32.930985  343912 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1129 09:17:32.931224  343912 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1129 09:17:32.933425  343912 out.go:179] * Using Docker driver with root privileges
	I1129 09:17:32.934824  343912 cni.go:84] Creating CNI manager for ""
	I1129 09:17:32.934925  343912 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:17:32.934944  343912 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1129 09:17:32.935044  343912 start.go:353] cluster config:
	{Name:newest-cni-020433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-020433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:17:32.936354  343912 out.go:179] * Starting "newest-cni-020433" primary control-plane node in "newest-cni-020433" cluster
	I1129 09:17:32.937514  343912 cache.go:134] Beginning downloading kic base image for docker with crio
	I1129 09:17:32.938803  343912 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 09:17:32.940016  343912 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:17:32.940051  343912 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1129 09:17:32.940062  343912 cache.go:65] Caching tarball of preloaded images
	I1129 09:17:32.940107  343912 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 09:17:32.940163  343912 preload.go:238] Found /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1129 09:17:32.940176  343912 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1129 09:17:32.940278  343912 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/config.json ...
	I1129 09:17:32.940301  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/config.json: {Name:mk7d4da653b0e884b27837053cd3d354c3ff76e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:32.963727  343912 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 09:17:32.963754  343912 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 09:17:32.963777  343912 cache.go:243] Successfully downloaded all kic artifacts
	I1129 09:17:32.963830  343912 start.go:360] acquireMachinesLock for newest-cni-020433: {Name:mk6347901682a01c9d317c6a402722ce1e16792e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:17:32.963998  343912 start.go:364] duration metric: took 95.455µs to acquireMachinesLock for "newest-cni-020433"
	I1129 09:17:32.964029  343912 start.go:93] Provisioning new machine with config: &{Name:newest-cni-020433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-020433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 09:17:32.964128  343912 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Nov 29 09:16:53 no-preload-897274 crio[567]: time="2025-11-29T09:16:53.626509957Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 29 09:16:53 no-preload-897274 crio[567]: time="2025-11-29T09:16:53.847923759Z" level=info msg="Removing container: 669b31d641acb4f2d2c1bf42a4868c6e1bcfdf66c443ca1dafc4ef1fed587d25" id=c3d29e4f-3535-4569-bd87-3d620ca6f600 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 29 09:16:53 no-preload-897274 crio[567]: time="2025-11-29T09:16:53.85763454Z" level=info msg="Removed container 669b31d641acb4f2d2c1bf42a4868c6e1bcfdf66c443ca1dafc4ef1fed587d25: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5x998/dashboard-metrics-scraper" id=c3d29e4f-3535-4569-bd87-3d620ca6f600 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 29 09:17:13 no-preload-897274 crio[567]: time="2025-11-29T09:17:13.907609144Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f73d4cba-ce99-4ac7-b2cb-e471b3914270 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:17:13 no-preload-897274 crio[567]: time="2025-11-29T09:17:13.908615747Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=57b53b42-7fcc-45aa-9296-a1ac21524fc8 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:17:13 no-preload-897274 crio[567]: time="2025-11-29T09:17:13.909739957Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ace1841e-7106-4717-990f-f71e95e7aa5e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:17:13 no-preload-897274 crio[567]: time="2025-11-29T09:17:13.909906762Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:13 no-preload-897274 crio[567]: time="2025-11-29T09:17:13.914054964Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:13 no-preload-897274 crio[567]: time="2025-11-29T09:17:13.914274102Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/63f925b16e94d7112715160fea428a2c9f628d440e086b9b6d157e663731d8c6/merged/etc/passwd: no such file or directory"
	Nov 29 09:17:13 no-preload-897274 crio[567]: time="2025-11-29T09:17:13.91431116Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/63f925b16e94d7112715160fea428a2c9f628d440e086b9b6d157e663731d8c6/merged/etc/group: no such file or directory"
	Nov 29 09:17:13 no-preload-897274 crio[567]: time="2025-11-29T09:17:13.914636795Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:13 no-preload-897274 crio[567]: time="2025-11-29T09:17:13.939509353Z" level=info msg="Created container 59647e07ee8a091171cec7b590acbc5f09666c37222fbab34c9303475c8dd562: kube-system/storage-provisioner/storage-provisioner" id=ace1841e-7106-4717-990f-f71e95e7aa5e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:17:13 no-preload-897274 crio[567]: time="2025-11-29T09:17:13.940189743Z" level=info msg="Starting container: 59647e07ee8a091171cec7b590acbc5f09666c37222fbab34c9303475c8dd562" id=923c222c-6bf8-4bdd-9c48-71ddd1bf7493 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 09:17:13 no-preload-897274 crio[567]: time="2025-11-29T09:17:13.942362254Z" level=info msg="Started container" PID=1767 containerID=59647e07ee8a091171cec7b590acbc5f09666c37222fbab34c9303475c8dd562 description=kube-system/storage-provisioner/storage-provisioner id=923c222c-6bf8-4bdd-9c48-71ddd1bf7493 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5475289b6f8558922e141fe7087f060c73c323100165bb3649cd389f4e6220a4
	Nov 29 09:17:15 no-preload-897274 crio[567]: time="2025-11-29T09:17:15.769818839Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a03fc0f4-adce-4634-af16-f9c6b103d2e6 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:17:15 no-preload-897274 crio[567]: time="2025-11-29T09:17:15.77273848Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8491fd38-8114-4263-8cb1-5fb5f31147b6 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:17:15 no-preload-897274 crio[567]: time="2025-11-29T09:17:15.773988972Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5x998/dashboard-metrics-scraper" id=1f906f68-1f9c-499c-8dd6-d2c2076dba7c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:17:15 no-preload-897274 crio[567]: time="2025-11-29T09:17:15.774138977Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:15 no-preload-897274 crio[567]: time="2025-11-29T09:17:15.779578326Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:15 no-preload-897274 crio[567]: time="2025-11-29T09:17:15.780263523Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:15 no-preload-897274 crio[567]: time="2025-11-29T09:17:15.825507561Z" level=info msg="Created container 4bf1ca4b85763fb7cd9174bcda6d47ff02461f77f69bd536caf0e9695c5bd144: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5x998/dashboard-metrics-scraper" id=1f906f68-1f9c-499c-8dd6-d2c2076dba7c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:17:15 no-preload-897274 crio[567]: time="2025-11-29T09:17:15.826307782Z" level=info msg="Starting container: 4bf1ca4b85763fb7cd9174bcda6d47ff02461f77f69bd536caf0e9695c5bd144" id=b1c892ac-70a0-4da6-8471-e3fb2c9a02ed name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 09:17:15 no-preload-897274 crio[567]: time="2025-11-29T09:17:15.828700091Z" level=info msg="Started container" PID=1783 containerID=4bf1ca4b85763fb7cd9174bcda6d47ff02461f77f69bd536caf0e9695c5bd144 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5x998/dashboard-metrics-scraper id=b1c892ac-70a0-4da6-8471-e3fb2c9a02ed name=/runtime.v1.RuntimeService/StartContainer sandboxID=201141d085a345342d342b3f232dbe1985c0f53cca68b3bac909d1be3bb7a6cc
	Nov 29 09:17:15 no-preload-897274 crio[567]: time="2025-11-29T09:17:15.91844564Z" level=info msg="Removing container: 0ddd695d6cbfcceee3a6c665c9274415590bfd179949ffd9e9b055d76bd9acc4" id=149c8517-9a31-4e5c-af77-069eba7647d6 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 29 09:17:15 no-preload-897274 crio[567]: time="2025-11-29T09:17:15.935145803Z" level=info msg="Removed container 0ddd695d6cbfcceee3a6c665c9274415590bfd179949ffd9e9b055d76bd9acc4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5x998/dashboard-metrics-scraper" id=149c8517-9a31-4e5c-af77-069eba7647d6 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	4bf1ca4b85763       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago      Exited              dashboard-metrics-scraper   2                   201141d085a34       dashboard-metrics-scraper-6ffb444bf9-5x998   kubernetes-dashboard
	59647e07ee8a0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   5475289b6f855       storage-provisioner                          kube-system
	9cb3694a5dbd1       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   47 seconds ago      Running             kubernetes-dashboard        0                   0c74399ea54ea       kubernetes-dashboard-855c9754f9-6fjrq        kubernetes-dashboard
	0c8d6d8a59c84       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           54 seconds ago      Running             coredns                     0                   08f8a4759f81b       coredns-66bc5c9577-85hh2                     kube-system
	1e940917edcba       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   5475289b6f855       storage-provisioner                          kube-system
	18110d07bbdb3       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   16f91e78e4b5c       busybox                                      default
	519adcff5cf34       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   2e30708fe0643       kindnet-jbmcv                                kube-system
	373fd7f555c01       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           54 seconds ago      Running             kube-proxy                  0                   d48d6e77ace84       kube-proxy-h9zhz                             kube-system
	65cad02ba2a79       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           57 seconds ago      Running             kube-controller-manager     0                   6ffa286fa8876       kube-controller-manager-no-preload-897274    kube-system
	ad66b46c591eb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           57 seconds ago      Running             etcd                        0                   ce65afa8dd94e       etcd-no-preload-897274                       kube-system
	652695edd3b36       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           57 seconds ago      Running             kube-apiserver              0                   3cb56f6eeac95       kube-apiserver-no-preload-897274             kube-system
	aef2b06f8a8ac       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           57 seconds ago      Running             kube-scheduler              0                   c7bae4bbe4b06       kube-scheduler-no-preload-897274             kube-system
	
	
	==> coredns [0c8d6d8a59c849da593bdc3e9048fb92e32d3eab72f152bc61fc3709ed0731db] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55476 - 11879 "HINFO IN 4741727914574649139.3026824069202930127. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.01756231s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-897274
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-897274
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=no-preload-897274
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T09_15_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 09:15:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-897274
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 09:17:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 09:17:13 +0000   Sat, 29 Nov 2025 09:15:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 09:17:13 +0000   Sat, 29 Nov 2025 09:15:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 09:17:13 +0000   Sat, 29 Nov 2025 09:15:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 09:17:13 +0000   Sat, 29 Nov 2025 09:16:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-897274
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                fc2d6958-d45c-48d6-8525-65c7170610ae
	  Boot ID:                    5b66e152-4b71-4129-a911-cefbf4861c86
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-85hh2                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-no-preload-897274                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-jbmcv                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-no-preload-897274              250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-no-preload-897274     200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-h9zhz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-no-preload-897274              100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-5x998    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-6fjrq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 105s               kube-proxy       
	  Normal  Starting                 55s                kube-proxy       
	  Normal  Starting                 114s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  113s               kubelet          Node no-preload-897274 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s               kubelet          Node no-preload-897274 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s               kubelet          Node no-preload-897274 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           109s               node-controller  Node no-preload-897274 event: Registered Node no-preload-897274 in Controller
	  Normal  NodeReady                95s                kubelet          Node no-preload-897274 status is now: NodeReady
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)  kubelet          Node no-preload-897274 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)  kubelet          Node no-preload-897274 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)  kubelet          Node no-preload-897274 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           53s                node-controller  Node no-preload-897274 event: Registered Node no-preload-897274 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 e1 87 e4 84 ae 08 06
	[  +0.002640] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8a c7 6c 3e 12 d1 08 06
	[ +14.778105] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 56 8a 29 c7 2a 6f 08 06
	[ +13.520989] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 86 32 aa eb 52 77 08 06
	[  +0.000371] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 56 8a 29 c7 2a 6f 08 06
	[ +14.762906] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 df ae 93 da f6 08 06
	[  +0.000384] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a c7 6c 3e 12 d1 08 06
	[Nov29 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 2e a3 ac f9 5c c0 08 06
	[ +13.297626] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 33 ff 54 aa a1 08 06
	[  +7.390906] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0a 68 f3 3d 3c e9 08 06
	[  +0.000436] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 2e a3 ac f9 5c c0 08 06
	[  +7.145922] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ea c3 2f 2a a3 01 08 06
	[  +0.000415] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e2 33 ff 54 aa a1 08 06
	
	
	==> etcd [ad66b46c591ebaf67ffea99e3f782c8b3c848d695dab97ba85d7b414cf4c3170] <==
	{"level":"warn","ts":"2025-11-29T09:16:41.586608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.594679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.610231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.617216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.624292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.631561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.639367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.646464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.652633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.659604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.672089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.678898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.685621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.693011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.706609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.714393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.721512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.730519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.738156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.746527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.769122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.773045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.779805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.787161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:16:41.848341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41210","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:17:38 up  1:00,  0 user,  load average: 3.49, 3.80, 2.49
	Linux no-preload-897274 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [519adcff5cf34b2b28b8394ca213b1c1f9c0f4a8d2d08dd5d4945135c6ed4a10] <==
	I1129 09:16:43.398119       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 09:16:43.398448       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1129 09:16:43.398659       1 main.go:148] setting mtu 1500 for CNI 
	I1129 09:16:43.398680       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 09:16:43.398705       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T09:16:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 09:16:43.603048       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 09:16:43.603125       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 09:16:43.603137       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 09:16:43.603404       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1129 09:16:43.994060       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 09:16:43.994188       1 metrics.go:72] Registering metrics
	I1129 09:16:43.994342       1 controller.go:711] "Syncing nftables rules"
	I1129 09:16:53.603086       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1129 09:16:53.603169       1 main.go:301] handling current node
	I1129 09:17:03.606542       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1129 09:17:03.606579       1 main.go:301] handling current node
	I1129 09:17:13.603593       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1129 09:17:13.603629       1 main.go:301] handling current node
	I1129 09:17:23.602989       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1129 09:17:23.603029       1 main.go:301] handling current node
	I1129 09:17:33.611958       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1129 09:17:33.611996       1 main.go:301] handling current node
	
	
	==> kube-apiserver [652695edd3b368ed64211f7ee974fad1ce2be0ae46ac90c153b50e751c36007b] <==
	I1129 09:16:42.357939       1 cache.go:39] Caches are synced for autoregister controller
	I1129 09:16:42.358666       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1129 09:16:42.360110       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1129 09:16:42.360308       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1129 09:16:42.360332       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1129 09:16:42.360249       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1129 09:16:42.361102       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1129 09:16:42.361037       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	E1129 09:16:42.368047       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1129 09:16:42.370195       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1129 09:16:42.380615       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1129 09:16:42.391048       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 09:16:42.697251       1 controller.go:667] quota admission added evaluator for: namespaces
	I1129 09:16:42.759482       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 09:16:42.814615       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 09:16:42.830813       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 09:16:42.841008       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 09:16:42.911936       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.18.197"}
	I1129 09:16:42.943592       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.254.239"}
	I1129 09:16:43.262199       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 09:16:45.678790       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 09:16:45.678864       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 09:16:46.029210       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 09:16:46.228905       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 09:16:46.228905       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [65cad02ba2a7910bcdcffb28c773b53da4d5023aecfd588deeacf22d8dca4a38] <==
	I1129 09:16:45.698398       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1129 09:16:45.701511       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1129 09:16:45.704979       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1129 09:16:45.706490       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:16:45.707554       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1129 09:16:45.710170       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1129 09:16:45.713981       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1129 09:16:45.717283       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1129 09:16:45.720726       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1129 09:16:45.720742       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1129 09:16:45.721905       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1129 09:16:45.721994       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1129 09:16:45.724488       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1129 09:16:45.724874       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1129 09:16:45.724898       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1129 09:16:45.725273       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1129 09:16:45.725425       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1129 09:16:45.726749       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1129 09:16:45.726871       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1129 09:16:45.726876       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1129 09:16:45.726972       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1129 09:16:45.726960       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-897274"
	I1129 09:16:45.727041       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1129 09:16:45.730859       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1129 09:16:45.735125       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [373fd7f555c013460e8c02caadd4d3bd9483657ac34e29d424536bbb510f2532] <==
	I1129 09:16:43.195861       1 server_linux.go:53] "Using iptables proxy"
	I1129 09:16:43.268719       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 09:16:43.368975       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 09:16:43.369097       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1129 09:16:43.369271       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 09:16:43.392599       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 09:16:43.392677       1 server_linux.go:132] "Using iptables Proxier"
	I1129 09:16:43.399482       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 09:16:43.399917       1 server.go:527] "Version info" version="v1.34.1"
	I1129 09:16:43.399941       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:16:43.401313       1 config.go:200] "Starting service config controller"
	I1129 09:16:43.401448       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 09:16:43.401455       1 config.go:309] "Starting node config controller"
	I1129 09:16:43.401468       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 09:16:43.401674       1 config.go:106] "Starting endpoint slice config controller"
	I1129 09:16:43.402394       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 09:16:43.401692       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 09:16:43.402421       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 09:16:43.501833       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 09:16:43.501871       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 09:16:43.503009       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 09:16:43.503014       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [aef2b06f8a8ac95a822e5865d9062a9500764f567fb042a1dbeda8630e6e5914] <==
	I1129 09:16:40.833131       1 serving.go:386] Generated self-signed cert in-memory
	I1129 09:16:42.331291       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1129 09:16:42.331325       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:16:42.339688       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1129 09:16:42.339719       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1129 09:16:42.339719       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:16:42.339743       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:16:42.339745       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1129 09:16:42.339730       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1129 09:16:42.340144       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1129 09:16:42.340596       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1129 09:16:42.439932       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1129 09:16:42.440121       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:16:42.440459       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Nov 29 09:16:46 no-preload-897274 kubelet[723]: I1129 09:16:46.427422     723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/66ea79bb-5692-472d-947c-7f67b687560c-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-6fjrq\" (UID: \"66ea79bb-5692-472d-947c-7f67b687560c\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6fjrq"
	Nov 29 09:16:46 no-preload-897274 kubelet[723]: I1129 09:16:46.427478     723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v45c8\" (UniqueName: \"kubernetes.io/projected/66ea79bb-5692-472d-947c-7f67b687560c-kube-api-access-v45c8\") pod \"kubernetes-dashboard-855c9754f9-6fjrq\" (UID: \"66ea79bb-5692-472d-947c-7f67b687560c\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6fjrq"
	Nov 29 09:16:46 no-preload-897274 kubelet[723]: I1129 09:16:46.427512     723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/646dce54-6e5c-4117-ab60-2d56f76f16a1-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-5x998\" (UID: \"646dce54-6e5c-4117-ab60-2d56f76f16a1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5x998"
	Nov 29 09:16:46 no-preload-897274 kubelet[723]: I1129 09:16:46.427605     723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf4nh\" (UniqueName: \"kubernetes.io/projected/646dce54-6e5c-4117-ab60-2d56f76f16a1-kube-api-access-gf4nh\") pod \"dashboard-metrics-scraper-6ffb444bf9-5x998\" (UID: \"646dce54-6e5c-4117-ab60-2d56f76f16a1\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5x998"
	Nov 29 09:16:48 no-preload-897274 kubelet[723]: I1129 09:16:48.466529     723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 29 09:16:51 no-preload-897274 kubelet[723]: I1129 09:16:51.100551     723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6fjrq" podStartSLOduration=1.724284953 podStartE2EDuration="5.100523592s" podCreationTimestamp="2025-11-29 09:16:46 +0000 UTC" firstStartedPulling="2025-11-29 09:16:46.63075521 +0000 UTC m=+6.956028773" lastFinishedPulling="2025-11-29 09:16:50.006993851 +0000 UTC m=+10.332267412" observedRunningTime="2025-11-29 09:16:50.904256433 +0000 UTC m=+11.229530005" watchObservedRunningTime="2025-11-29 09:16:51.100523592 +0000 UTC m=+11.425797176"
	Nov 29 09:16:52 no-preload-897274 kubelet[723]: I1129 09:16:52.842732     723 scope.go:117] "RemoveContainer" containerID="669b31d641acb4f2d2c1bf42a4868c6e1bcfdf66c443ca1dafc4ef1fed587d25"
	Nov 29 09:16:53 no-preload-897274 kubelet[723]: I1129 09:16:53.846550     723 scope.go:117] "RemoveContainer" containerID="669b31d641acb4f2d2c1bf42a4868c6e1bcfdf66c443ca1dafc4ef1fed587d25"
	Nov 29 09:16:53 no-preload-897274 kubelet[723]: I1129 09:16:53.846709     723 scope.go:117] "RemoveContainer" containerID="0ddd695d6cbfcceee3a6c665c9274415590bfd179949ffd9e9b055d76bd9acc4"
	Nov 29 09:16:53 no-preload-897274 kubelet[723]: E1129 09:16:53.846913     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5x998_kubernetes-dashboard(646dce54-6e5c-4117-ab60-2d56f76f16a1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5x998" podUID="646dce54-6e5c-4117-ab60-2d56f76f16a1"
	Nov 29 09:16:54 no-preload-897274 kubelet[723]: I1129 09:16:54.850893     723 scope.go:117] "RemoveContainer" containerID="0ddd695d6cbfcceee3a6c665c9274415590bfd179949ffd9e9b055d76bd9acc4"
	Nov 29 09:16:54 no-preload-897274 kubelet[723]: E1129 09:16:54.851060     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5x998_kubernetes-dashboard(646dce54-6e5c-4117-ab60-2d56f76f16a1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5x998" podUID="646dce54-6e5c-4117-ab60-2d56f76f16a1"
	Nov 29 09:17:02 no-preload-897274 kubelet[723]: I1129 09:17:02.629129     723 scope.go:117] "RemoveContainer" containerID="0ddd695d6cbfcceee3a6c665c9274415590bfd179949ffd9e9b055d76bd9acc4"
	Nov 29 09:17:02 no-preload-897274 kubelet[723]: E1129 09:17:02.629386     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5x998_kubernetes-dashboard(646dce54-6e5c-4117-ab60-2d56f76f16a1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5x998" podUID="646dce54-6e5c-4117-ab60-2d56f76f16a1"
	Nov 29 09:17:13 no-preload-897274 kubelet[723]: I1129 09:17:13.907219     723 scope.go:117] "RemoveContainer" containerID="1e940917edcba9ed0c9cecbd2f8b6f46be2ada309804967c806a5001be24dc45"
	Nov 29 09:17:15 no-preload-897274 kubelet[723]: I1129 09:17:15.769095     723 scope.go:117] "RemoveContainer" containerID="0ddd695d6cbfcceee3a6c665c9274415590bfd179949ffd9e9b055d76bd9acc4"
	Nov 29 09:17:15 no-preload-897274 kubelet[723]: I1129 09:17:15.917043     723 scope.go:117] "RemoveContainer" containerID="0ddd695d6cbfcceee3a6c665c9274415590bfd179949ffd9e9b055d76bd9acc4"
	Nov 29 09:17:15 no-preload-897274 kubelet[723]: I1129 09:17:15.917265     723 scope.go:117] "RemoveContainer" containerID="4bf1ca4b85763fb7cd9174bcda6d47ff02461f77f69bd536caf0e9695c5bd144"
	Nov 29 09:17:15 no-preload-897274 kubelet[723]: E1129 09:17:15.917490     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5x998_kubernetes-dashboard(646dce54-6e5c-4117-ab60-2d56f76f16a1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5x998" podUID="646dce54-6e5c-4117-ab60-2d56f76f16a1"
	Nov 29 09:17:22 no-preload-897274 kubelet[723]: I1129 09:17:22.629485     723 scope.go:117] "RemoveContainer" containerID="4bf1ca4b85763fb7cd9174bcda6d47ff02461f77f69bd536caf0e9695c5bd144"
	Nov 29 09:17:22 no-preload-897274 kubelet[723]: E1129 09:17:22.629661     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5x998_kubernetes-dashboard(646dce54-6e5c-4117-ab60-2d56f76f16a1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5x998" podUID="646dce54-6e5c-4117-ab60-2d56f76f16a1"
	Nov 29 09:17:32 no-preload-897274 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 29 09:17:32 no-preload-897274 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 29 09:17:32 no-preload-897274 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 29 09:17:32 no-preload-897274 systemd[1]: kubelet.service: Consumed 1.759s CPU time.
	
	
	==> kubernetes-dashboard [9cb3694a5dbd1f8de0fd09777a72d591a3bc36f97de400cddbcf1adb6df108e7] <==
	2025/11/29 09:16:50 Starting overwatch
	2025/11/29 09:16:50 Using namespace: kubernetes-dashboard
	2025/11/29 09:16:50 Using in-cluster config to connect to apiserver
	2025/11/29 09:16:50 Using secret token for csrf signing
	2025/11/29 09:16:50 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/29 09:16:50 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/29 09:16:50 Successful initial request to the apiserver, version: v1.34.1
	2025/11/29 09:16:50 Generating JWE encryption key
	2025/11/29 09:16:50 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/29 09:16:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/29 09:16:50 Initializing JWE encryption key from synchronized object
	2025/11/29 09:16:50 Creating in-cluster Sidecar client
	2025/11/29 09:16:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/29 09:16:50 Serving insecurely on HTTP port: 9090
	2025/11/29 09:17:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [1e940917edcba9ed0c9cecbd2f8b6f46be2ada309804967c806a5001be24dc45] <==
	I1129 09:16:43.166091       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1129 09:17:13.168032       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [59647e07ee8a091171cec7b590acbc5f09666c37222fbab34c9303475c8dd562] <==
	I1129 09:17:13.956368       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1129 09:17:13.964085       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1129 09:17:13.964137       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1129 09:17:13.966510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:17:17.422060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:17:21.682405       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:17:25.280549       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:17:28.334095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:17:31.356151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:17:31.415619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:17:31.415777       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 09:17:31.415930       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"effd3485-4df8-4871-84ed-37c153135089", APIVersion:"v1", ResourceVersion:"634", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-897274_1199971e-b124-4727-bcfa-d8547fb697b4 became leader
	I1129 09:17:31.416005       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-897274_1199971e-b124-4727-bcfa-d8547fb697b4!
	W1129 09:17:31.418317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:17:31.434420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:17:31.516704       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-897274_1199971e-b124-4727-bcfa-d8547fb697b4!
	W1129 09:17:33.438502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:17:33.443811       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:17:35.448575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:17:35.457453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:17:37.460807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:17:37.523981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-897274 -n no-preload-897274
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-897274 -n no-preload-897274: exit status 2 (398.451568ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-897274 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (7.70s)
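For reference, this pause post-mortem boils down to three commands that can be replayed by hand. A minimal sketch, assuming the no-preload-897274 profile from this run still exists and the minikube binary sits at the same relative path (profile name, binary path, and all flags are taken verbatim from the log above; nothing else is assumed):

	# re-run the pause step that failed in this entry
	out/minikube-linux-amd64 pause -p no-preload-897274 --alsologtostderr -v=1
	# check apiserver state the way helpers_test.go does (exit status 2 with "Running" here, noted as "may be ok")
	out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-897274 -n no-preload-897274
	# list any pods that are not in the Running phase
	kubectl --context no-preload-897274 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running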

TestStartStop/group/default-k8s-diff-port/serial/Pause (6.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-632243 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-632243 --alsologtostderr -v=1: exit status 80 (1.900151545s)

-- stdout --
	* Pausing node default-k8s-diff-port-632243 ... 
	
	

-- /stdout --
** stderr ** 
	I1129 09:17:57.649615  348860 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:17:57.649726  348860 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:17:57.649737  348860 out.go:374] Setting ErrFile to fd 2...
	I1129 09:17:57.649744  348860 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:17:57.649930  348860 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 09:17:57.650183  348860 out.go:368] Setting JSON to false
	I1129 09:17:57.650200  348860 mustload.go:66] Loading cluster: default-k8s-diff-port-632243
	I1129 09:17:57.650543  348860 config.go:182] Loaded profile config "default-k8s-diff-port-632243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:17:57.650932  348860 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-632243 --format={{.State.Status}}
	I1129 09:17:57.670420  348860 host.go:66] Checking if "default-k8s-diff-port-632243" exists ...
	I1129 09:17:57.670752  348860 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:17:57.734041  348860 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-29 09:17:57.722539271 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:17:57.734670  348860 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-632243 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1129 09:17:57.736294  348860 out.go:179] * Pausing node default-k8s-diff-port-632243 ... 
	I1129 09:17:57.737350  348860 host.go:66] Checking if "default-k8s-diff-port-632243" exists ...
	I1129 09:17:57.737620  348860 ssh_runner.go:195] Run: systemctl --version
	I1129 09:17:57.737678  348860 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-632243
	I1129 09:17:57.758936  348860 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/default-k8s-diff-port-632243/id_rsa Username:docker}
	I1129 09:17:57.860787  348860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:17:57.874153  348860 pause.go:52] kubelet running: true
	I1129 09:17:57.874243  348860 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1129 09:17:58.053690  348860 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1129 09:17:58.053783  348860 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1129 09:17:58.123760  348860 cri.go:89] found id: "a0a7d268774521215ec9a9e231f8cbff24ab751bb32017578f757b56b868382b"
	I1129 09:17:58.123786  348860 cri.go:89] found id: "f86d4e903149f37fd40a69cf9fdd0675519e20733587ef981faf69f6c60584c4"
	I1129 09:17:58.123791  348860 cri.go:89] found id: "dcd5e71f1547b1e671741b763fc8bcd6c37b199a7a47b50bded35e37d88f15e5"
	I1129 09:17:58.123795  348860 cri.go:89] found id: "6d9c6a1fe80d134c1649a8574ce4c7fed4aca61a3d0743bc1723d61b82585852"
	I1129 09:17:58.123799  348860 cri.go:89] found id: "92732529bb831b3e850239c923cd55b6ba3e6316b7e319567d1bb7ed6abde79e"
	I1129 09:17:58.123804  348860 cri.go:89] found id: "b13c8a23740acd98b7a6a7244c86241544729c4895bf870e9bb842604451a0f4"
	I1129 09:17:58.123808  348860 cri.go:89] found id: "2080eaa5b786c79ead07692c870ce9928ace57a47032f699d66882570b205513"
	I1129 09:17:58.123812  348860 cri.go:89] found id: "c75e80b4e2dbb59237ca7e83b6a87a80d377951cce4c561324de39b3ea24a433"
	I1129 09:17:58.123816  348860 cri.go:89] found id: "be8adeee9f904b03165bd07f7f9279fad60f6e70a12d988e651be3f8e0e5974c"
	I1129 09:17:58.123826  348860 cri.go:89] found id: "7ef057717f2cb9bfee390527d1229b602af731a3770cbb94529401105f0694e2"
	I1129 09:17:58.123831  348860 cri.go:89] found id: "4886082fa2dbfffcedb9ba51af5ccb52f1828c3b9f03f1a2f251ec784b244659"
	I1129 09:17:58.123835  348860 cri.go:89] found id: ""
	I1129 09:17:58.123896  348860 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 09:17:58.136633  348860 retry.go:31] will retry after 316.877626ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:17:58Z" level=error msg="open /run/runc: no such file or directory"
	I1129 09:17:58.454241  348860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:17:58.467637  348860 pause.go:52] kubelet running: false
	I1129 09:17:58.467732  348860 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1129 09:17:58.613860  348860 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1129 09:17:58.613986  348860 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1129 09:17:58.681574  348860 cri.go:89] found id: "a0a7d268774521215ec9a9e231f8cbff24ab751bb32017578f757b56b868382b"
	I1129 09:17:58.681600  348860 cri.go:89] found id: "f86d4e903149f37fd40a69cf9fdd0675519e20733587ef981faf69f6c60584c4"
	I1129 09:17:58.681607  348860 cri.go:89] found id: "dcd5e71f1547b1e671741b763fc8bcd6c37b199a7a47b50bded35e37d88f15e5"
	I1129 09:17:58.681614  348860 cri.go:89] found id: "6d9c6a1fe80d134c1649a8574ce4c7fed4aca61a3d0743bc1723d61b82585852"
	I1129 09:17:58.681626  348860 cri.go:89] found id: "92732529bb831b3e850239c923cd55b6ba3e6316b7e319567d1bb7ed6abde79e"
	I1129 09:17:58.681632  348860 cri.go:89] found id: "b13c8a23740acd98b7a6a7244c86241544729c4895bf870e9bb842604451a0f4"
	I1129 09:17:58.681640  348860 cri.go:89] found id: "2080eaa5b786c79ead07692c870ce9928ace57a47032f699d66882570b205513"
	I1129 09:17:58.681645  348860 cri.go:89] found id: "c75e80b4e2dbb59237ca7e83b6a87a80d377951cce4c561324de39b3ea24a433"
	I1129 09:17:58.681652  348860 cri.go:89] found id: "be8adeee9f904b03165bd07f7f9279fad60f6e70a12d988e651be3f8e0e5974c"
	I1129 09:17:58.681672  348860 cri.go:89] found id: "7ef057717f2cb9bfee390527d1229b602af731a3770cbb94529401105f0694e2"
	I1129 09:17:58.681680  348860 cri.go:89] found id: "4886082fa2dbfffcedb9ba51af5ccb52f1828c3b9f03f1a2f251ec784b244659"
	I1129 09:17:58.681683  348860 cri.go:89] found id: ""
	I1129 09:17:58.681722  348860 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 09:17:58.694318  348860 retry.go:31] will retry after 533.079738ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:17:58Z" level=error msg="open /run/runc: no such file or directory"
	I1129 09:17:59.228035  348860 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:17:59.241736  348860 pause.go:52] kubelet running: false
	I1129 09:17:59.241821  348860 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1129 09:17:59.392795  348860 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1129 09:17:59.392907  348860 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1129 09:17:59.466150  348860 cri.go:89] found id: "a0a7d268774521215ec9a9e231f8cbff24ab751bb32017578f757b56b868382b"
	I1129 09:17:59.466179  348860 cri.go:89] found id: "f86d4e903149f37fd40a69cf9fdd0675519e20733587ef981faf69f6c60584c4"
	I1129 09:17:59.466186  348860 cri.go:89] found id: "dcd5e71f1547b1e671741b763fc8bcd6c37b199a7a47b50bded35e37d88f15e5"
	I1129 09:17:59.466192  348860 cri.go:89] found id: "6d9c6a1fe80d134c1649a8574ce4c7fed4aca61a3d0743bc1723d61b82585852"
	I1129 09:17:59.466197  348860 cri.go:89] found id: "92732529bb831b3e850239c923cd55b6ba3e6316b7e319567d1bb7ed6abde79e"
	I1129 09:17:59.466202  348860 cri.go:89] found id: "b13c8a23740acd98b7a6a7244c86241544729c4895bf870e9bb842604451a0f4"
	I1129 09:17:59.466207  348860 cri.go:89] found id: "2080eaa5b786c79ead07692c870ce9928ace57a47032f699d66882570b205513"
	I1129 09:17:59.466212  348860 cri.go:89] found id: "c75e80b4e2dbb59237ca7e83b6a87a80d377951cce4c561324de39b3ea24a433"
	I1129 09:17:59.466216  348860 cri.go:89] found id: "be8adeee9f904b03165bd07f7f9279fad60f6e70a12d988e651be3f8e0e5974c"
	I1129 09:17:59.466240  348860 cri.go:89] found id: "7ef057717f2cb9bfee390527d1229b602af731a3770cbb94529401105f0694e2"
	I1129 09:17:59.466249  348860 cri.go:89] found id: "4886082fa2dbfffcedb9ba51af5ccb52f1828c3b9f03f1a2f251ec784b244659"
	I1129 09:17:59.466254  348860 cri.go:89] found id: ""
	I1129 09:17:59.466311  348860 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 09:17:59.479753  348860 out.go:203] 
	W1129 09:17:59.480874  348860 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:17:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 09:17:59.480900  348860 out.go:285] * 
	W1129 09:17:59.484979  348860 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 09:17:59.486181  348860 out.go:203] 

** /stderr **
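Every Pause failure in this report follows the same shape: kubelet is stopped, crictl still lists the kube-system containers, but `sudo runc list -f json` fails with `open /run/runc: no such file or directory`, so the command exits with GUEST_PAUSE once its retries are exhausted. The sketch below reproduces that retry loop under stated assumptions: the helper names are illustrative, not minikube's actual API, and the backoff values only approximate the 316ms and 533ms intervals logged by retry.go above.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// listRunning mirrors the failing call from the log: `sudo runc list -f json`.
// On this host it errors because /run/runc does not exist under the crio runtime.
func listRunning() error {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		return fmt.Errorf("runc list -f json: %w\n%s", err, out)
	}
	return nil
}

func main() {
	backoff := 300 * time.Millisecond
	const attempts = 3
	for i := 1; i <= attempts; i++ {
		err := listRunning()
		if err == nil {
			return
		}
		if i == attempts {
			// This is the point where the log above exits with GUEST_PAUSE.
			fmt.Println("Exiting due to GUEST_PAUSE:", err)
			return
		}
		fmt.Printf("will retry after %v: %v\n", backoff, err)
		time.Sleep(backoff)
		backoff += backoff / 2 // grow the wait between attempts
	}
}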
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-632243 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-632243
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-632243:

-- stdout --
	[
	    {
	        "Id": "34542347c69bda84e7d5a150f01f074a3f776313cbcfbd090bb64fe6de277d88",
	        "Created": "2025-11-29T09:16:00.909438015Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 337290,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T09:17:02.786122524Z",
	            "FinishedAt": "2025-11-29T09:17:01.791647739Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/34542347c69bda84e7d5a150f01f074a3f776313cbcfbd090bb64fe6de277d88/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/34542347c69bda84e7d5a150f01f074a3f776313cbcfbd090bb64fe6de277d88/hostname",
	        "HostsPath": "/var/lib/docker/containers/34542347c69bda84e7d5a150f01f074a3f776313cbcfbd090bb64fe6de277d88/hosts",
	        "LogPath": "/var/lib/docker/containers/34542347c69bda84e7d5a150f01f074a3f776313cbcfbd090bb64fe6de277d88/34542347c69bda84e7d5a150f01f074a3f776313cbcfbd090bb64fe6de277d88-json.log",
	        "Name": "/default-k8s-diff-port-632243",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-632243:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-632243",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "34542347c69bda84e7d5a150f01f074a3f776313cbcfbd090bb64fe6de277d88",
	                "LowerDir": "/var/lib/docker/overlay2/7263fb3772af2f1b363fa16d989f215dd7f46480236fb7471fbfb55fcc94f1fb-init/diff:/var/lib/docker/overlay2/5b012372cfb54f6c71f4d7f0bca0124866eeda530eaa04bd84a67c7b4c8b35a8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7263fb3772af2f1b363fa16d989f215dd7f46480236fb7471fbfb55fcc94f1fb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7263fb3772af2f1b363fa16d989f215dd7f46480236fb7471fbfb55fcc94f1fb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7263fb3772af2f1b363fa16d989f215dd7f46480236fb7471fbfb55fcc94f1fb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-632243",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-632243/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-632243",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-632243",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-632243",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "cf263793e0f0ee74ceb8db2201473b4f33060de791688a1ff4ee23ec22feed75",
	            "SandboxKey": "/var/run/docker/netns/cf263793e0f0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-632243": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7a23ed3dab8d4d6fb6f9edc51b6864da564467aa8f10cf2599da81a3bf2593e1",
	                    "EndpointID": "c0a55e71f7c94755d0fc30b355b6da6e868cf8770d96b3927d4b974a9d3b98e6",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "6e:61:8e:1e:ad:34",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-632243",
	                        "34542347c69b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
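The pause command located the node's SSH endpoint from this same inspect data: the cli_runner line at 09:17:57.737678 renders the template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}, which resolves to 33124 in the Ports map above. A minimal sketch of that lookup, assuming only the docker CLI is available (the function name is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostSSHPort extracts the host port bound to the container's 22/tcp,
// using the same --format template the pause command ran above.
func hostSSHPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("default-k8s-diff-port-632243")
	if err != nil {
		panic(err)
	}
	fmt.Println(port) // prints 33124 for the inspect output above
}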
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-632243 -n default-k8s-diff-port-632243
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-632243 -n default-k8s-diff-port-632243: exit status 2 (337.775253ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-632243 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-632243 logs -n 25: (1.184928617s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p old-k8s-version-680646 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:16 UTC │
	│ addons  │ enable metrics-server -p no-preload-897274 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	│ stop    │ -p no-preload-897274 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:16 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-680646 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:16 UTC │
	│ start   │ -p old-k8s-version-680646 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:17 UTC │
	│ start   │ -p no-preload-897274 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-160987 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-632243 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	│ stop    │ -p embed-certs-160987 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:17 UTC │
	│ stop    │ -p default-k8s-diff-port-632243 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-160987 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ start   │ -p embed-certs-160987 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-632243 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ start   │ -p default-k8s-diff-port-632243 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ image   │ old-k8s-version-680646 image list --format=json                                                                                                                                                                                               │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ pause   │ -p old-k8s-version-680646 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │                     │
	│ delete  │ -p old-k8s-version-680646                                                                                                                                                                                                                     │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ image   │ no-preload-897274 image list --format=json                                                                                                                                                                                                    │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ pause   │ -p no-preload-897274 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │                     │
	│ delete  │ -p old-k8s-version-680646                                                                                                                                                                                                                     │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ start   │ -p newest-cni-020433 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-020433            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │                     │
	│ delete  │ -p no-preload-897274                                                                                                                                                                                                                          │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ delete  │ -p no-preload-897274                                                                                                                                                                                                                          │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ image   │ default-k8s-diff-port-632243 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ pause   │ -p default-k8s-diff-port-632243 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:17:32
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 09:17:32.750525  343912 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:17:32.750831  343912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:17:32.750854  343912 out.go:374] Setting ErrFile to fd 2...
	I1129 09:17:32.750859  343912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:17:32.751040  343912 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 09:17:32.751569  343912 out.go:368] Setting JSON to false
	I1129 09:17:32.753086  343912 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3605,"bootTime":1764404248,"procs":412,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 09:17:32.753155  343912 start.go:143] virtualization: kvm guest
	I1129 09:17:32.755163  343912 out.go:179] * [newest-cni-020433] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 09:17:32.756656  343912 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:17:32.756692  343912 notify.go:221] Checking for updates...
	I1129 09:17:32.759425  343912 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:17:32.760722  343912 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 09:17:32.765362  343912 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5652/.minikube
	I1129 09:17:32.766699  343912 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 09:17:32.768011  343912 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:17:32.769812  343912 config.go:182] Loaded profile config "default-k8s-diff-port-632243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:17:32.769952  343912 config.go:182] Loaded profile config "embed-certs-160987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:17:32.770081  343912 config.go:182] Loaded profile config "no-preload-897274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:17:32.770208  343912 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:17:32.794655  343912 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1129 09:17:32.794775  343912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:17:32.856269  343912 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:74 SystemTime:2025-11-29 09:17:32.845151576 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:17:32.856388  343912 docker.go:319] overlay module found
	I1129 09:17:32.858258  343912 out.go:179] * Using the docker driver based on user configuration
	I1129 09:17:32.859415  343912 start.go:309] selected driver: docker
	I1129 09:17:32.859434  343912 start.go:927] validating driver "docker" against <nil>
	I1129 09:17:32.859451  343912 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:17:32.860352  343912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:17:32.930751  343912 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:74 SystemTime:2025-11-29 09:17:32.91839311 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:17:32.930951  343912 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1129 09:17:32.930985  343912 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1129 09:17:32.931224  343912 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1129 09:17:32.933425  343912 out.go:179] * Using Docker driver with root privileges
	I1129 09:17:32.934824  343912 cni.go:84] Creating CNI manager for ""
	I1129 09:17:32.934925  343912 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:17:32.934944  343912 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1129 09:17:32.935044  343912 start.go:353] cluster config:
	{Name:newest-cni-020433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-020433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:17:32.936354  343912 out.go:179] * Starting "newest-cni-020433" primary control-plane node in "newest-cni-020433" cluster
	I1129 09:17:32.937514  343912 cache.go:134] Beginning downloading kic base image for docker with crio
	I1129 09:17:32.938803  343912 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 09:17:32.940016  343912 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:17:32.940051  343912 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1129 09:17:32.940062  343912 cache.go:65] Caching tarball of preloaded images
	I1129 09:17:32.940107  343912 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 09:17:32.940163  343912 preload.go:238] Found /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1129 09:17:32.940176  343912 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1129 09:17:32.940278  343912 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/config.json ...
	I1129 09:17:32.940301  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/config.json: {Name:mk7d4da653b0e884b27837053cd3d354c3ff76e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:32.963727  343912 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 09:17:32.963754  343912 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 09:17:32.963777  343912 cache.go:243] Successfully downloaded all kic artifacts
	I1129 09:17:32.963830  343912 start.go:360] acquireMachinesLock for newest-cni-020433: {Name:mk6347901682a01c9d317c6a402722ce1e16792e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:17:32.963998  343912 start.go:364] duration metric: took 95.455µs to acquireMachinesLock for "newest-cni-020433"
	I1129 09:17:32.964029  343912 start.go:93] Provisioning new machine with config: &{Name:newest-cni-020433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-020433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 09:17:32.964128  343912 start.go:125] createHost starting for "" (driver="docker")
	W1129 09:17:33.828970  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	W1129 09:17:35.829789  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	W1129 09:17:33.948277  336858 pod_ready.go:104] pod "coredns-66bc5c9577-z4m7c" is not "Ready", error: <nil>
	W1129 09:17:36.448064  336858 pod_ready.go:104] pod "coredns-66bc5c9577-z4m7c" is not "Ready", error: <nil>
	I1129 09:17:32.965989  343912 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1129 09:17:32.966316  343912 start.go:159] libmachine.API.Create for "newest-cni-020433" (driver="docker")
	I1129 09:17:32.966356  343912 client.go:173] LocalClient.Create starting
	I1129 09:17:32.966470  343912 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem
	I1129 09:17:32.966524  343912 main.go:143] libmachine: Decoding PEM data...
	I1129 09:17:32.966555  343912 main.go:143] libmachine: Parsing certificate...
	I1129 09:17:32.966626  343912 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem
	I1129 09:17:32.966654  343912 main.go:143] libmachine: Decoding PEM data...
	I1129 09:17:32.966670  343912 main.go:143] libmachine: Parsing certificate...
	I1129 09:17:32.967123  343912 cli_runner.go:164] Run: docker network inspect newest-cni-020433 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1129 09:17:32.987734  343912 cli_runner.go:211] docker network inspect newest-cni-020433 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1129 09:17:32.987872  343912 network_create.go:284] running [docker network inspect newest-cni-020433] to gather additional debugging logs...
	I1129 09:17:32.987905  343912 cli_runner.go:164] Run: docker network inspect newest-cni-020433
	W1129 09:17:33.007164  343912 cli_runner.go:211] docker network inspect newest-cni-020433 returned with exit code 1
	I1129 09:17:33.007194  343912 network_create.go:287] error running [docker network inspect newest-cni-020433]: docker network inspect newest-cni-020433: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-020433 not found
	I1129 09:17:33.007209  343912 network_create.go:289] output of [docker network inspect newest-cni-020433]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-020433 not found
	
	** /stderr **
	I1129 09:17:33.007343  343912 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:17:33.027663  343912 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-94fc752bc7a7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:ed:43:e0:ad:5a} reservation:<nil>}
	I1129 09:17:33.028420  343912 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4cfc302f5d5a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:42:73:ac:ba:18:bb} reservation:<nil>}
	I1129 09:17:33.029339  343912 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-05a73bbe16b8 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:a9:af:00:78:ac} reservation:<nil>}
	I1129 09:17:33.030217  343912 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eb6cd0}
	I1129 09:17:33.030243  343912 network_create.go:124] attempt to create docker network newest-cni-020433 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1129 09:17:33.030303  343912 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-020433 newest-cni-020433
	I1129 09:17:33.088543  343912 network_create.go:108] docker network newest-cni-020433 192.168.76.0/24 created
	I1129 09:17:33.088582  343912 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-020433" container
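
The scan above steps the third octet by 9 (49, 58, 67) until 192.168.76.0/24 comes up free. A minimal Go sketch of that first-free selection; the step size is inferred from this log, not taken from minikube's actual allocator:

    package main

    import "fmt"

    // firstFreeSubnet walks candidate 192.168.x.0/24 subnets, stepping the
    // third octet by 9 (the 49 -> 58 -> 67 -> 76 progression in the log),
    // and returns the first one that is not already taken.
    func firstFreeSubnet(taken map[string]bool) string {
        for octet := 49; octet <= 255; octet += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", octet)
            if !taken[cidr] {
                return cidr
            }
        }
        return ""
    }

    func main() {
        taken := map[string]bool{
            "192.168.49.0/24": true, // br-94fc752bc7a7
            "192.168.58.0/24": true, // br-4cfc302f5d5a
            "192.168.67.0/24": true, // br-05a73bbe16b8
        }
        fmt.Println(firstFreeSubnet(taken)) // prints 192.168.76.0/24
    }

Against the three taken bridges it prints 192.168.76.0/24, matching the network that was created.
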
	I1129 09:17:33.088651  343912 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1129 09:17:33.110031  343912 cli_runner.go:164] Run: docker volume create newest-cni-020433 --label name.minikube.sigs.k8s.io=newest-cni-020433 --label created_by.minikube.sigs.k8s.io=true
	I1129 09:17:33.131986  343912 oci.go:103] Successfully created a docker volume newest-cni-020433
	I1129 09:17:33.132086  343912 cli_runner.go:164] Run: docker run --rm --name newest-cni-020433-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-020433 --entrypoint /usr/bin/test -v newest-cni-020433:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1129 09:17:33.542784  343912 oci.go:107] Successfully prepared a docker volume newest-cni-020433
	I1129 09:17:33.542890  343912 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:17:33.542904  343912 kic.go:194] Starting extracting preloaded images to volume ...
	I1129 09:17:33.542963  343912 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-020433:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	W1129 09:17:38.328506  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	W1129 09:17:40.827427  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	W1129 09:17:38.452229  336858 pod_ready.go:104] pod "coredns-66bc5c9577-z4m7c" is not "Ready", error: <nil>
	W1129 09:17:40.947913  336858 pod_ready.go:104] pod "coredns-66bc5c9577-z4m7c" is not "Ready", error: <nil>
	I1129 09:17:38.398985  343912 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-020433:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.855972089s)
	I1129 09:17:38.399017  343912 kic.go:203] duration metric: took 4.856111068s to extract preloaded images to volume ...
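
The duration metric lines are plain wall-clock timings taken around each step. A sketch of the pattern with time.Since, where the sleep stands in for the ~4.9s tar extraction:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        start := time.Now()
        time.Sleep(50 * time.Millisecond) // stand-in for the tar extraction
        fmt.Printf("duration metric: took %s to extract preloaded images to volume\n", time.Since(start))
    }
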
	W1129 09:17:38.399145  343912 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1129 09:17:38.399190  343912 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1129 09:17:38.399238  343912 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1129 09:17:38.467132  343912 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-020433 --name newest-cni-020433 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-020433 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-020433 --network newest-cni-020433 --ip 192.168.76.2 --volume newest-cni-020433:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1129 09:17:39.064807  343912 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Running}}
	I1129 09:17:39.085951  343912 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Status}}
	I1129 09:17:39.108652  343912 cli_runner.go:164] Run: docker exec newest-cni-020433 stat /var/lib/dpkg/alternatives/iptables
	I1129 09:17:39.159933  343912 oci.go:144] the created container "newest-cni-020433" has a running status.
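
The docker run above is one long argument list: privileged mode, tmpfs mounts, a static IP on the freshly created network, and published host ports for 22, 8443, 2376, 5000 and 32443. A sketch of assembling such a list programmatically; the flag subset is abbreviated and nothing is actually started:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        name := "newest-cni-020433"
        args := []string{
            "run", "-d", "-t", "--privileged",
            "--security-opt", "seccomp=unconfined",
            "--tmpfs", "/tmp", "--tmpfs", "/run",
            "--hostname", name, "--name", name,
            "--network", name, "--ip", "192.168.76.2",
            "--memory=3072mb",
            "--publish", "127.0.0.1::22",
            "--publish", "127.0.0.1::8443",
        }
        // Print the command instead of exec'ing it.
        fmt.Println("docker " + strings.Join(args, " "))
    }
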
	I1129 09:17:39.159970  343912 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa...
	I1129 09:17:39.228797  343912 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1129 09:17:39.262675  343912 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Status}}
	I1129 09:17:39.285576  343912 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1129 09:17:39.285600  343912 kic_runner.go:114] Args: [docker exec --privileged newest-cni-020433 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1129 09:17:39.349410  343912 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Status}}
	I1129 09:17:39.369689  343912 machine.go:94] provisionDockerMachine start ...
	I1129 09:17:39.369803  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:39.396522  343912 main.go:143] libmachine: Using SSH client type: native
	I1129 09:17:39.396932  343912 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1129 09:17:39.396965  343912 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:17:39.397982  343912 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
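
A handshake EOF immediately after container start is expected while sshd is still coming up; the dial is simply retried until it succeeds, as the next command output shows. A minimal retry sketch with a stand-in dial function:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // dialWithRetry keeps calling dial until it succeeds or attempts run
    // out, sleeping between tries; the first EOF above is absorbed this way.
    func dialWithRetry(dial func() error, attempts int, wait time.Duration) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = dial(); err == nil {
                return nil
            }
            time.Sleep(wait)
        }
        return fmt.Errorf("gave up after %d attempts: %w", attempts, err)
    }

    func main() {
        calls := 0
        err := dialWithRetry(func() error {
            calls++
            if calls < 3 {
                return errors.New("ssh: handshake failed: EOF") // container still booting
            }
            return nil
        }, 5, 100*time.Millisecond)
        fmt.Println("err:", err, "calls:", calls)
    }
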
	I1129 09:17:42.550448  343912 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-020433
	
	I1129 09:17:42.550474  343912 ubuntu.go:182] provisioning hostname "newest-cni-020433"
	I1129 09:17:42.550527  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:42.572133  343912 main.go:143] libmachine: Using SSH client type: native
	I1129 09:17:42.572440  343912 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1129 09:17:42.572461  343912 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-020433 && echo "newest-cni-020433" | sudo tee /etc/hostname
	I1129 09:17:42.733805  343912 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-020433
	
	I1129 09:17:42.733897  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:42.754783  343912 main.go:143] libmachine: Using SSH client type: native
	I1129 09:17:42.755144  343912 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1129 09:17:42.755173  343912 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-020433' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-020433/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-020433' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:17:42.901064  343912 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 09:17:42.901098  343912 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-5652/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-5652/.minikube}
	I1129 09:17:42.901148  343912 ubuntu.go:190] setting up certificates
	I1129 09:17:42.901161  343912 provision.go:84] configureAuth start
	I1129 09:17:42.901231  343912 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-020433
	I1129 09:17:42.921161  343912 provision.go:143] copyHostCerts
	I1129 09:17:42.921240  343912 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem, removing ...
	I1129 09:17:42.921253  343912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem
	I1129 09:17:42.921344  343912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem (1078 bytes)
	I1129 09:17:42.921497  343912 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem, removing ...
	I1129 09:17:42.921509  343912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem
	I1129 09:17:42.921568  343912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem (1123 bytes)
	I1129 09:17:42.921658  343912 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem, removing ...
	I1129 09:17:42.921666  343912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem
	I1129 09:17:42.921693  343912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem (1675 bytes)
	I1129 09:17:42.921761  343912 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem org=jenkins.newest-cni-020433 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-020433]
	I1129 09:17:43.032466  343912 provision.go:177] copyRemoteCerts
	I1129 09:17:43.032525  343912 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:17:43.032558  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:43.052823  343912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:17:43.158233  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1129 09:17:43.179138  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1129 09:17:43.198311  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1129 09:17:43.217652  343912 provision.go:87] duration metric: took 316.475572ms to configureAuth
	I1129 09:17:43.217682  343912 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:17:43.217917  343912 config.go:182] Loaded profile config "newest-cni-020433": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:17:43.218034  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:43.237980  343912 main.go:143] libmachine: Using SSH client type: native
	I1129 09:17:43.238211  343912 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1129 09:17:43.238225  343912 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 09:17:43.535016  343912 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 09:17:43.535041  343912 machine.go:97] duration metric: took 4.165320057s to provisionDockerMachine
	I1129 09:17:43.535052  343912 client.go:176] duration metric: took 10.568687757s to LocalClient.Create
	I1129 09:17:43.535073  343912 start.go:167] duration metric: took 10.568756916s to libmachine.API.Create "newest-cni-020433"
	I1129 09:17:43.535083  343912 start.go:293] postStartSetup for "newest-cni-020433" (driver="docker")
	I1129 09:17:43.535095  343912 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:17:43.535160  343912 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:17:43.535203  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:43.554574  343912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:17:43.661234  343912 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:17:43.665051  343912 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:17:43.665086  343912 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:17:43.665114  343912 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5652/.minikube/addons for local assets ...
	I1129 09:17:43.665186  343912 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5652/.minikube/files for local assets ...
	I1129 09:17:43.665301  343912 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem -> 92162.pem in /etc/ssl/certs
	I1129 09:17:43.665409  343912 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:17:43.674165  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem --> /etc/ssl/certs/92162.pem (1708 bytes)
	I1129 09:17:43.696383  343912 start.go:296] duration metric: took 161.286243ms for postStartSetup
	I1129 09:17:43.696751  343912 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-020433
	I1129 09:17:43.716301  343912 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/config.json ...
	I1129 09:17:43.716589  343912 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:17:43.716640  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:43.735518  343912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:17:43.835307  343912 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 09:17:43.840211  343912 start.go:128] duration metric: took 10.876067654s to createHost
	I1129 09:17:43.840237  343912 start.go:83] releasing machines lock for "newest-cni-020433", held for 10.876224942s
	I1129 09:17:43.840309  343912 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-020433
	I1129 09:17:43.860942  343912 ssh_runner.go:195] Run: cat /version.json
	I1129 09:17:43.860995  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:43.861019  343912 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:17:43.861110  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:43.881396  343912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:17:43.881825  343912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:17:44.035348  343912 ssh_runner.go:195] Run: systemctl --version
	I1129 09:17:44.042398  343912 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 09:17:44.079667  343912 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:17:44.084668  343912 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:17:44.084747  343912 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:17:44.112611  343912 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1129 09:17:44.112638  343912 start.go:496] detecting cgroup driver to use...
	I1129 09:17:44.112675  343912 detect.go:190] detected "systemd" cgroup driver on host os
	I1129 09:17:44.112721  343912 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 09:17:44.130191  343912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 09:17:44.143333  343912 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:17:44.143407  343912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:17:44.160522  343912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:17:44.179005  343912 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:17:44.264507  343912 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:17:44.361596  343912 docker.go:234] disabling docker service ...
	I1129 09:17:44.361665  343912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:17:44.385098  343912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:17:44.399261  343912 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:17:44.490353  343912 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:17:44.577339  343912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:17:44.590606  343912 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:17:44.606040  343912 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 09:17:44.606113  343912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:44.617850  343912 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1129 09:17:44.617930  343912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:44.627795  343912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:44.637388  343912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:44.647881  343912 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:17:44.657593  343912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:44.667667  343912 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:44.683312  343912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:44.693180  343912 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:17:44.701299  343912 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:17:44.709519  343912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:17:44.789707  343912 ssh_runner.go:195] Run: sudo systemctl restart crio
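
Each sed call above rewrites a whole key = value line of /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. The same line-oriented rewrite in Go with the standard regexp package; the starting file contents below are made up for the demo:

    package main

    import (
        "fmt"
        "regexp"
    )

    // setTOMLKey replaces any line mentioning the key, mirroring
    // sed 's|^.*key = .*$|key = "value"|'.
    func setTOMLKey(conf, key, value string) string {
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        return re.ReplaceAllString(conf, key+` = "`+value+`"`)
    }

    func main() {
        conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"cgroupfs\"\n"
        conf = setTOMLKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
        conf = setTOMLKey(conf, "cgroup_manager", "systemd")
        fmt.Print(conf)
    }
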
	I1129 09:17:44.946719  343912 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 09:17:44.946786  343912 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 09:17:44.950988  343912 start.go:564] Will wait 60s for crictl version
	I1129 09:17:44.951061  343912 ssh_runner.go:195] Run: which crictl
	I1129 09:17:44.954897  343912 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:17:44.981273  343912 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1129 09:17:44.981355  343912 ssh_runner.go:195] Run: crio --version
	I1129 09:17:45.010241  343912 ssh_runner.go:195] Run: crio --version
	I1129 09:17:45.041932  343912 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1129 09:17:45.043598  343912 cli_runner.go:164] Run: docker network inspect newest-cni-020433 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:17:45.064493  343912 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1129 09:17:45.068916  343912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
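
The one-liner above makes the host.minikube.internal mapping idempotent: drop any line already ending in the name, append a fresh tab-separated entry, and copy the result back over /etc/hosts. The same logic against an in-memory hosts file:

    package main

    import (
        "fmt"
        "strings"
    )

    // upsertHost removes any existing line for name, then appends the
    // fresh "ip<TAB>name" mapping, so repeated runs leave one entry.
    func upsertHost(hosts, ip, name string) string {
        var out []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                out = append(out, line)
            }
        }
        out = append(out, ip+"\t"+name)
        return strings.Join(out, "\n") + "\n"
    }

    func main() {
        hosts := "127.0.0.1\tlocalhost\n192.168.76.1\thost.minikube.internal\n"
        fmt.Print(upsertHost(hosts, "192.168.76.1", "host.minikube.internal"))
    }
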
	I1129 09:17:45.081636  343912 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1129 09:17:43.447332  336858 pod_ready.go:104] pod "coredns-66bc5c9577-z4m7c" is not "Ready", error: <nil>
	I1129 09:17:44.449613  336858 pod_ready.go:94] pod "coredns-66bc5c9577-z4m7c" is "Ready"
	I1129 09:17:44.449647  336858 pod_ready.go:86] duration metric: took 31.007906695s for pod "coredns-66bc5c9577-z4m7c" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:44.452244  336858 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:44.456751  336858 pod_ready.go:94] pod "etcd-default-k8s-diff-port-632243" is "Ready"
	I1129 09:17:44.456779  336858 pod_ready.go:86] duration metric: took 4.509231ms for pod "etcd-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:44.458972  336858 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:44.464014  336858 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-632243" is "Ready"
	I1129 09:17:44.464045  336858 pod_ready.go:86] duration metric: took 5.045626ms for pod "kube-apiserver-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:44.466444  336858 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:44.645988  336858 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-632243" is "Ready"
	I1129 09:17:44.646021  336858 pod_ready.go:86] duration metric: took 179.551463ms for pod "kube-controller-manager-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:44.845460  336858 pod_ready.go:83] waiting for pod "kube-proxy-p2nf7" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:45.245518  336858 pod_ready.go:94] pod "kube-proxy-p2nf7" is "Ready"
	I1129 09:17:45.245548  336858 pod_ready.go:86] duration metric: took 400.053767ms for pod "kube-proxy-p2nf7" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:45.445969  336858 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:45.847024  336858 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-632243" is "Ready"
	I1129 09:17:45.847054  336858 pod_ready.go:86] duration metric: took 401.056115ms for pod "kube-scheduler-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:45.847067  336858 pod_ready.go:40] duration metric: took 32.409409019s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:17:45.894722  336858 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1129 09:17:45.896514  336858 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-632243" cluster and "default" namespace by default
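
The minor skew line above compares kubectl's minor version with the cluster's; kubectl is supported within one minor version of the API server in either direction. A simplified sketch of that comparison (real version strings can carry pre-release suffixes this parser ignores):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minor pulls the middle component out of a "major.minor.patch" string.
    func minor(v string) int {
        parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
        m, _ := strconv.Atoi(parts[1])
        return m
    }

    func main() {
        kubectl, cluster := "1.34.2", "1.34.1"
        skew := minor(kubectl) - minor(cluster)
        if skew < 0 {
            skew = -skew
        }
        fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
    }
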
	W1129 09:17:42.828310  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	W1129 09:17:44.828378  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	I1129 09:17:45.082734  343912 kubeadm.go:884] updating cluster {Name:newest-cni-020433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-020433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:17:45.082902  343912 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:17:45.082966  343912 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:17:45.116711  343912 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:17:45.116737  343912 crio.go:433] Images already preloaded, skipping extraction
	I1129 09:17:45.116794  343912 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:17:45.143455  343912 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:17:45.143477  343912 cache_images.go:86] Images are preloaded, skipping loading
	I1129 09:17:45.143484  343912 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1129 09:17:45.143562  343912 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-020433 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-020433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 09:17:45.143624  343912 ssh_runner.go:195] Run: crio config
	I1129 09:17:45.191199  343912 cni.go:84] Creating CNI manager for ""
	I1129 09:17:45.191226  343912 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:17:45.191244  343912 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1129 09:17:45.191264  343912 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-020433 NodeName:newest-cni-020433 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:17:45.191372  343912 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-020433"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1129 09:17:45.191438  343912 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 09:17:45.199969  343912 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 09:17:45.200043  343912 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:17:45.208777  343912 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1129 09:17:45.222978  343912 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:17:45.238915  343912 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
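
The 2211-byte kubeadm.yaml above stacks four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) rendered from Go templates. A pared-down sketch of such rendering; the template text and struct fields are inventions for the demo, not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    // A toy slice of the ClusterConfiguration document only; minikube's
    // real template covers all four stacked documents.
    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: ClusterConfiguration
    clusterName: mk
    controlPlaneEndpoint: {{.Endpoint}}
    kubernetesVersion: {{.Version}}
    networking:
      dnsDomain: cluster.local
      podSubnet: "{{.PodCIDR}}"
      serviceSubnet: {{.ServiceCIDR}}
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        _ = t.Execute(os.Stdout, struct {
            Endpoint, Version, PodCIDR, ServiceCIDR string
        }{"control-plane.minikube.internal:8443", "v1.34.1", "10.42.0.0/16", "10.96.0.0/12"})
    }
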
	I1129 09:17:45.253505  343912 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1129 09:17:45.257546  343912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:17:45.269034  343912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:17:45.354518  343912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:17:45.382355  343912 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433 for IP: 192.168.76.2
	I1129 09:17:45.382379  343912 certs.go:195] generating shared ca certs ...
	I1129 09:17:45.382407  343912 certs.go:227] acquiring lock for ca certs: {Name:mk9945b3aa42b3d50b9bfd795af3a1c63f4e35bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:45.382577  343912 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key
	I1129 09:17:45.382636  343912 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key
	I1129 09:17:45.382650  343912 certs.go:257] generating profile certs ...
	I1129 09:17:45.382718  343912 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/client.key
	I1129 09:17:45.382739  343912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/client.crt with IP's: []
	I1129 09:17:45.531926  343912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/client.crt ...
	I1129 09:17:45.531957  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/client.crt: {Name:mkeb17feaf8ba6750a01bd0a1f0441d4154bc65f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:45.532140  343912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/client.key ...
	I1129 09:17:45.532151  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/client.key: {Name:mke1454a7dc3fbfdd29bdb836050690bcbb7394e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:45.532230  343912 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.key.22e84c70
	I1129 09:17:45.532247  343912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.crt.22e84c70 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1129 09:17:45.624876  343912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.crt.22e84c70 ...
	I1129 09:17:45.624908  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.crt.22e84c70: {Name:mk7ef25787741e084b6a866e43c94e1e8fef637a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:45.625077  343912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.key.22e84c70 ...
	I1129 09:17:45.625090  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.key.22e84c70: {Name:mk1ecd69640eeb4a11bb5f1e1ff7ab99459cb558 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:45.625222  343912 certs.go:382] copying /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.crt.22e84c70 -> /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.crt
	I1129 09:17:45.625303  343912 certs.go:386] copying /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.key.22e84c70 -> /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.key
	I1129 09:17:45.625381  343912 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.key
	I1129 09:17:45.625401  343912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.crt with IP's: []
	I1129 09:17:45.648826  343912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.crt ...
	I1129 09:17:45.648864  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.crt: {Name:mk66c6222d92d3d2bb033717f49fc6858d0a9367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:45.649040  343912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.key ...
	I1129 09:17:45.649052  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.key: {Name:mk559719a3cba034552025e578cadb28054704f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
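
The profile certs above (client, apiserver, aggregator proxy-client) are generated locally and signed by the shared minikubeCA. A standard-library sketch of signing a serving cert for the apiserver SANs listed above; subject fields are simplified and most error handling is dropped for brevity:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Self-signed CA, standing in for minikubeCA.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration above
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Serving cert with the apiserver SANs from the log.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
            },
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        fmt.Println(len(der), err)
    }
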
	I1129 09:17:45.649223  343912 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216.pem (1338 bytes)
	W1129 09:17:45.649259  343912 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216_empty.pem, impossibly tiny 0 bytes
	I1129 09:17:45.649269  343912 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem (1679 bytes)
	I1129 09:17:45.649291  343912 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem (1078 bytes)
	I1129 09:17:45.649314  343912 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:17:45.649337  343912 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem (1675 bytes)
	I1129 09:17:45.649376  343912 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem (1708 bytes)
	I1129 09:17:45.649920  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:17:45.669435  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 09:17:45.688777  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:17:45.707612  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1129 09:17:45.726954  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1129 09:17:45.745570  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1129 09:17:45.763773  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:17:45.781717  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1129 09:17:45.799936  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem --> /usr/share/ca-certificates/92162.pem (1708 bytes)
	I1129 09:17:45.820108  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:17:45.839214  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216.pem --> /usr/share/ca-certificates/9216.pem (1338 bytes)
	I1129 09:17:45.859643  343912 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:17:45.874007  343912 ssh_runner.go:195] Run: openssl version
	I1129 09:17:45.880775  343912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9216.pem && ln -fs /usr/share/ca-certificates/9216.pem /etc/ssl/certs/9216.pem"
	I1129 09:17:45.890438  343912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9216.pem
	I1129 09:17:45.894494  343912 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:34 /usr/share/ca-certificates/9216.pem
	I1129 09:17:45.894554  343912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9216.pem
	I1129 09:17:45.934499  343912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9216.pem /etc/ssl/certs/51391683.0"
	I1129 09:17:45.944013  343912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/92162.pem && ln -fs /usr/share/ca-certificates/92162.pem /etc/ssl/certs/92162.pem"
	I1129 09:17:45.953676  343912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92162.pem
	I1129 09:17:45.957999  343912 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:34 /usr/share/ca-certificates/92162.pem
	I1129 09:17:45.958047  343912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92162.pem
	I1129 09:17:45.998219  343912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/92162.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 09:17:46.008105  343912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:17:46.018512  343912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:17:46.022778  343912 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:17:46.022855  343912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:17:46.060278  343912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
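
The hash in each /etc/ssl/certs/<hash>.0 symlink is OpenSSL's subject hash, which its directory-based CA lookup expects as the filename. A sketch that shells out to openssl exactly as the log does; it assumes the openssl binary and the PEM path from this node:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // subjectHash runs "openssl x509 -hash -noout -in <pem>", the same
    // command visible in the log, and returns the short hash.
    func subjectHash(pemPath string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
        fmt.Println(h, err) // b5213941 on the node above
    }
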
	I1129 09:17:46.069685  343912 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:17:46.073627  343912 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1129 09:17:46.073677  343912 kubeadm.go:401] StartCluster: {Name:newest-cni-020433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-020433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:17:46.073751  343912 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 09:17:46.073796  343912 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:17:46.102729  343912 cri.go:89] found id: ""
	I1129 09:17:46.102806  343912 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:17:46.111499  343912 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1129 09:17:46.120045  343912 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1129 09:17:46.120110  343912 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1129 09:17:46.128326  343912 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1129 09:17:46.128366  343912 kubeadm.go:158] found existing configuration files:
	
	I1129 09:17:46.128413  343912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1129 09:17:46.136677  343912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1129 09:17:46.136741  343912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1129 09:17:46.144727  343912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1129 09:17:46.152908  343912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1129 09:17:46.152971  343912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1129 09:17:46.161300  343912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1129 09:17:46.170050  343912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1129 09:17:46.170117  343912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1129 09:17:46.179094  343912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1129 09:17:46.190258  343912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1129 09:17:46.190325  343912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
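
The eight commands above are minikube's stale-config cleanup before a fresh init: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint, and because grep exits with status 2 when the file itself is missing, every file is treated as unusable and force-removed. A minimal shell sketch of the same pattern (illustrative, not minikube's actual code):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # grep exits 1 if the endpoint is absent and 2 if the file is missing;
      # either way the old config cannot be reused, so it is removed.
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done
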
	I1129 09:17:46.200333  343912 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1129 09:17:46.284775  343912 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1129 09:17:46.350549  343912 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
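
Both kubeadm warnings are expected under the docker driver: this GCP host kernel ships no "configs" module, so kubeadm cannot read the kernel config, and minikube supervises the kubelet itself rather than enabling the systemd unit. The kernel-config probe can be reproduced by hand on the node (illustrative commands):

    sudo modprobe configs 2>&1 || true       # fails here, matching the warning
    ls /proc/config.gz "/boot/config-$(uname -r)" 2>/dev/null

Note that the log now interleaves two minikube processes: 343912 continues this init further below, while 336547 (the next lines) finishes waiting on the embed-certs profile.
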
	W1129 09:17:47.327775  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	W1129 09:17:49.327943  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	I1129 09:17:49.827724  336547 pod_ready.go:94] pod "coredns-66bc5c9577-ptx67" is "Ready"
	I1129 09:17:49.827757  336547 pod_ready.go:86] duration metric: took 36.505830154s for pod "coredns-66bc5c9577-ptx67" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:49.830193  336547 pod_ready.go:83] waiting for pod "etcd-embed-certs-160987" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:49.834087  336547 pod_ready.go:94] pod "etcd-embed-certs-160987" is "Ready"
	I1129 09:17:49.834117  336547 pod_ready.go:86] duration metric: took 3.892584ms for pod "etcd-embed-certs-160987" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:49.836236  336547 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-160987" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:49.840124  336547 pod_ready.go:94] pod "kube-apiserver-embed-certs-160987" is "Ready"
	I1129 09:17:49.840148  336547 pod_ready.go:86] duration metric: took 3.889352ms for pod "kube-apiserver-embed-certs-160987" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:49.842042  336547 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-160987" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:50.026423  336547 pod_ready.go:94] pod "kube-controller-manager-embed-certs-160987" is "Ready"
	I1129 09:17:50.026453  336547 pod_ready.go:86] duration metric: took 184.390727ms for pod "kube-controller-manager-embed-certs-160987" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:50.225618  336547 pod_ready.go:83] waiting for pod "kube-proxy-57l9h" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:50.626123  336547 pod_ready.go:94] pod "kube-proxy-57l9h" is "Ready"
	I1129 09:17:50.626149  336547 pod_ready.go:86] duration metric: took 400.500945ms for pod "kube-proxy-57l9h" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:50.826449  336547 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-160987" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:51.226295  336547 pod_ready.go:94] pod "kube-scheduler-embed-certs-160987" is "Ready"
	I1129 09:17:51.226329  336547 pod_ready.go:86] duration metric: took 399.854281ms for pod "kube-scheduler-embed-certs-160987" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:51.226346  336547 pod_ready.go:40] duration metric: took 37.909395781s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:17:51.285055  336547 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1129 09:17:51.286778  336547 out.go:179] * Done! kubectl is now configured to use "embed-certs-160987" cluster and "default" namespace by default
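
The wait loop that just completed polls each control-plane pod for the Ready condition, tolerating pods that disappear during the restart ("Ready" or be gone). Roughly the same check can be run by hand with kubectl, using one of the label selectors listed at the end of the wait (context name matches the profile; illustrative):

    kubectl --context embed-certs-160987 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
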
	I1129 09:17:56.491067  343912 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1129 09:17:56.491128  343912 kubeadm.go:319] [preflight] Running pre-flight checks
	I1129 09:17:56.491204  343912 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1129 09:17:56.491252  343912 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1129 09:17:56.491321  343912 kubeadm.go:319] OS: Linux
	I1129 09:17:56.491400  343912 kubeadm.go:319] CGROUPS_CPU: enabled
	I1129 09:17:56.491441  343912 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1129 09:17:56.491502  343912 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1129 09:17:56.491558  343912 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1129 09:17:56.491602  343912 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1129 09:17:56.491642  343912 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1129 09:17:56.491683  343912 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1129 09:17:56.491733  343912 kubeadm.go:319] CGROUPS_IO: enabled
	I1129 09:17:56.491834  343912 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1129 09:17:56.491984  343912 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1129 09:17:56.492110  343912 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1129 09:17:56.492184  343912 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1129 09:17:56.493947  343912 out.go:252]   - Generating certificates and keys ...
	I1129 09:17:56.494037  343912 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1129 09:17:56.494134  343912 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1129 09:17:56.494235  343912 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1129 09:17:56.494315  343912 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1129 09:17:56.494392  343912 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1129 09:17:56.494466  343912 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1129 09:17:56.494546  343912 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1129 09:17:56.494718  343912 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-020433] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1129 09:17:56.494781  343912 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1129 09:17:56.494923  343912 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-020433] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1129 09:17:56.495006  343912 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1129 09:17:56.495078  343912 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1129 09:17:56.495157  343912 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1129 09:17:56.495234  343912 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1129 09:17:56.495280  343912 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1129 09:17:56.495370  343912 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1129 09:17:56.495457  343912 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1129 09:17:56.495570  343912 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1129 09:17:56.495624  343912 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1129 09:17:56.495696  343912 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1129 09:17:56.495760  343912 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1129 09:17:56.497322  343912 out.go:252]   - Booting up control plane ...
	I1129 09:17:56.497460  343912 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1129 09:17:56.497563  343912 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1129 09:17:56.497652  343912 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1129 09:17:56.497741  343912 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1129 09:17:56.497818  343912 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1129 09:17:56.497976  343912 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1129 09:17:56.498111  343912 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1129 09:17:56.498169  343912 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1129 09:17:56.498335  343912 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1129 09:17:56.498461  343912 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1129 09:17:56.498530  343912 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.935954ms
	I1129 09:17:56.498616  343912 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1129 09:17:56.498731  343912 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1129 09:17:56.498879  343912 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1129 09:17:56.498988  343912 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1129 09:17:56.499073  343912 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.504475511s
	I1129 09:17:56.499172  343912 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.695464789s
	I1129 09:17:56.499266  343912 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501879872s
	I1129 09:17:56.499440  343912 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1129 09:17:56.499624  343912 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1129 09:17:56.499691  343912 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1129 09:17:56.500020  343912 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-020433 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1129 09:17:56.500135  343912 kubeadm.go:319] [bootstrap-token] Using token: f82gs2.l4bciq1r030lvxp0
	I1129 09:17:56.501325  343912 out.go:252]   - Configuring RBAC rules ...
	I1129 09:17:56.501453  343912 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1129 09:17:56.501553  343912 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1129 09:17:56.501684  343912 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1129 09:17:56.501866  343912 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1129 09:17:56.502025  343912 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1129 09:17:56.502108  343912 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1129 09:17:56.502227  343912 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1129 09:17:56.502273  343912 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1129 09:17:56.502315  343912 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1129 09:17:56.502321  343912 kubeadm.go:319] 
	I1129 09:17:56.502376  343912 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1129 09:17:56.502381  343912 kubeadm.go:319] 
	I1129 09:17:56.502451  343912 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1129 09:17:56.502460  343912 kubeadm.go:319] 
	I1129 09:17:56.502481  343912 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1129 09:17:56.502532  343912 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1129 09:17:56.502576  343912 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1129 09:17:56.502586  343912 kubeadm.go:319] 
	I1129 09:17:56.502629  343912 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1129 09:17:56.502639  343912 kubeadm.go:319] 
	I1129 09:17:56.502689  343912 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1129 09:17:56.502697  343912 kubeadm.go:319] 
	I1129 09:17:56.502745  343912 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1129 09:17:56.502810  343912 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1129 09:17:56.502890  343912 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1129 09:17:56.502897  343912 kubeadm.go:319] 
	I1129 09:17:56.502971  343912 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1129 09:17:56.503057  343912 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1129 09:17:56.503064  343912 kubeadm.go:319] 
	I1129 09:17:56.503140  343912 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token f82gs2.l4bciq1r030lvxp0 \
	I1129 09:17:56.503224  343912 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c67f487f8861b4229187cb5510054720af943291cf02c59a2b23e487361f58e2 \
	I1129 09:17:56.503244  343912 kubeadm.go:319] 	--control-plane 
	I1129 09:17:56.503252  343912 kubeadm.go:319] 
	I1129 09:17:56.503335  343912 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1129 09:17:56.503344  343912 kubeadm.go:319] 
	I1129 09:17:56.503417  343912 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token f82gs2.l4bciq1r030lvxp0 \
	I1129 09:17:56.503523  343912 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c67f487f8861b4229187cb5510054720af943291cf02c59a2b23e487361f58e2 
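
The discovery hash in the join commands above is the SHA-256 of the cluster CA's public key, so a join command can be verified out of band. Using minikube's certificate directory from earlier in this init (/var/lib/minikube/certs), the standard kubeadm recipe is:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
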
	I1129 09:17:56.503547  343912 cni.go:84] Creating CNI manager for ""
	I1129 09:17:56.503557  343912 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:17:56.504793  343912 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1129 09:17:56.505922  343912 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1129 09:17:56.510364  343912 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1129 09:17:56.510383  343912 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1129 09:17:56.523891  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1129 09:17:56.771723  343912 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1129 09:17:56.771759  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:17:56.771857  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-020433 minikube.k8s.io/updated_at=2025_11_29T09_17_56_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af minikube.k8s.io/name=newest-cni-020433 minikube.k8s.io/primary=true
	I1129 09:17:56.870386  343912 ops.go:34] apiserver oom_adj: -16
	I1129 09:17:56.870493  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:17:57.370894  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
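
After applying the CNI manifest, minikube records the apiserver's oom_adj (-16, i.e. strongly shielded from the kernel OOM killer), grants the kube-system default ServiceAccount cluster-admin for addon bootstrapping, labels the node, and then polls "kubectl get sa default" because the token controller creates that ServiceAccount asynchronously. The poll is equivalent to (illustrative):

    until kubectl -n default get sa default >/dev/null 2>&1; do sleep 0.5; done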
	
	
	==> CRI-O <==
	Nov 29 09:17:23 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:23.360534378Z" level=info msg="Started container" PID=1739 containerID=d9bf366d8ee10b072a7c9c6fa1d3c7fd589a7189275bd7b0bd93e19de4752a2d description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4hblj/dashboard-metrics-scraper id=a648d8c3-6665-44a7-a8df-48663fdb5dae name=/runtime.v1.RuntimeService/StartContainer sandboxID=1bdafc0c56077e5d4103e283843edd3c82a08d58226b0e90b75aa0f1513d6e69
	Nov 29 09:17:24 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:24.31587653Z" level=info msg="Removing container: 7ae6aa913848d1746e5e27c0db29a6cfcaac2bc794cb23b355bc43b3a082682f" id=001b74c8-6edb-4b3b-af14-3d74c43e32ac name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 29 09:17:24 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:24.326027348Z" level=info msg="Removed container 7ae6aa913848d1746e5e27c0db29a6cfcaac2bc794cb23b355bc43b3a082682f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4hblj/dashboard-metrics-scraper" id=001b74c8-6edb-4b3b-af14-3d74c43e32ac name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 29 09:17:41 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:41.229481517Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0f293ccd-59e2-4bf1-857b-b70db019304a name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:17:41 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:41.230606897Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0c11792f-e12a-4f5f-9e37-632d5b3cbb82 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:17:41 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:41.231683021Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4hblj/dashboard-metrics-scraper" id=301e9fb5-ac8d-4bbe-80ab-40ba677962e0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:17:41 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:41.231834257Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:41 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:41.238198124Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:41 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:41.23881064Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:41 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:41.272580682Z" level=info msg="Created container 7ef057717f2cb9bfee390527d1229b602af731a3770cbb94529401105f0694e2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4hblj/dashboard-metrics-scraper" id=301e9fb5-ac8d-4bbe-80ab-40ba677962e0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:17:41 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:41.273288929Z" level=info msg="Starting container: 7ef057717f2cb9bfee390527d1229b602af731a3770cbb94529401105f0694e2" id=55d633d1-bacf-4a2a-bd40-bc384f669c82 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 09:17:41 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:41.275555488Z" level=info msg="Started container" PID=1749 containerID=7ef057717f2cb9bfee390527d1229b602af731a3770cbb94529401105f0694e2 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4hblj/dashboard-metrics-scraper id=55d633d1-bacf-4a2a-bd40-bc384f669c82 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1bdafc0c56077e5d4103e283843edd3c82a08d58226b0e90b75aa0f1513d6e69
	Nov 29 09:17:41 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:41.362960577Z" level=info msg="Removing container: d9bf366d8ee10b072a7c9c6fa1d3c7fd589a7189275bd7b0bd93e19de4752a2d" id=702dfba5-eb5b-442d-b344-027e77d7d69b name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 29 09:17:41 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:41.374244236Z" level=info msg="Removed container d9bf366d8ee10b072a7c9c6fa1d3c7fd589a7189275bd7b0bd93e19de4752a2d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4hblj/dashboard-metrics-scraper" id=702dfba5-eb5b-442d-b344-027e77d7d69b name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 29 09:17:43 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:43.370265111Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a63f97f8-27bc-444b-b6b8-68c94af10b16 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:17:43 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:43.371226685Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e4bc7ef7-02ae-47c7-9086-7553a76a670b name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:17:43 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:43.372337959Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=0593753c-aa8c-43e1-8210-ede3ebd38be5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:17:43 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:43.372475112Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:43 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:43.37672276Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:43 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:43.376878Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/93d69e0b9f9d66df0ad841ff2a79cd42135575dd212e55e93fbdaf985a152e92/merged/etc/passwd: no such file or directory"
	Nov 29 09:17:43 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:43.376896978Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/93d69e0b9f9d66df0ad841ff2a79cd42135575dd212e55e93fbdaf985a152e92/merged/etc/group: no such file or directory"
	Nov 29 09:17:43 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:43.37714023Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:43 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:43.405930735Z" level=info msg="Created container a0a7d268774521215ec9a9e231f8cbff24ab751bb32017578f757b56b868382b: kube-system/storage-provisioner/storage-provisioner" id=0593753c-aa8c-43e1-8210-ede3ebd38be5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:17:43 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:43.406621918Z" level=info msg="Starting container: a0a7d268774521215ec9a9e231f8cbff24ab751bb32017578f757b56b868382b" id=dadb3733-9816-4dff-8f94-4a61c8f06dd1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 09:17:43 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:43.408746956Z" level=info msg="Started container" PID=1763 containerID=a0a7d268774521215ec9a9e231f8cbff24ab751bb32017578f757b56b868382b description=kube-system/storage-provisioner/storage-provisioner id=dadb3733-9816-4dff-8f94-4a61c8f06dd1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cffef6829c93519395a268b4f59ce4681b356db4fddd42df826f27db934709a4
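
This CRI-O excerpt shows dashboard-metrics-scraper being started, exiting, removed, and recreated, i.e. a crash loop, which the rising ATTEMPT counter in the status table below confirms. On the node (e.g. via minikube ssh), the restart history and last output can be pulled with crictl (the ID prefix is taken from the table below):

    sudo crictl ps -a --name dashboard-metrics-scraper
    sudo crictl logs 7ef057717f2cb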
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	a0a7d26877452       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           17 seconds ago      Running             storage-provisioner         1                   cffef6829c935       storage-provisioner                                    kube-system
	7ef057717f2cb       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago      Exited              dashboard-metrics-scraper   2                   1bdafc0c56077       dashboard-metrics-scraper-6ffb444bf9-4hblj             kubernetes-dashboard
	4886082fa2dbf       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   40 seconds ago      Running             kubernetes-dashboard        0                   a3762881ec276       kubernetes-dashboard-855c9754f9-26vff                  kubernetes-dashboard
	f86d4e903149f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           47 seconds ago      Running             coredns                     0                   e1de2ba52ea07       coredns-66bc5c9577-z4m7c                               kube-system
	3d542c139be2a       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           47 seconds ago      Running             busybox                     1                   a18a153401910       busybox                                                default
	dcd5e71f1547b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           47 seconds ago      Exited              storage-provisioner         0                   cffef6829c935       storage-provisioner                                    kube-system
	6d9c6a1fe80d1       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           47 seconds ago      Running             kube-proxy                  0                   c6663de35bbfd       kube-proxy-p2nf7                                       kube-system
	92732529bb831       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           47 seconds ago      Running             kindnet-cni                 0                   9c434856825f3       kindnet-tpstm                                          kube-system
	b13c8a23740ac       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           50 seconds ago      Running             kube-apiserver              0                   8552a74ed5f1a       kube-apiserver-default-k8s-diff-port-632243            kube-system
	2080eaa5b786c       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           50 seconds ago      Running             kube-scheduler              0                   90e78a2184480       kube-scheduler-default-k8s-diff-port-632243            kube-system
	c75e80b4e2dbb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           50 seconds ago      Running             etcd                        0                   20a5aaa0b97bc       etcd-default-k8s-diff-port-632243                      kube-system
	be8adeee9f904       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           50 seconds ago      Running             kube-controller-manager     0                   addd317cf1d68       kube-controller-manager-default-k8s-diff-port-632243   kube-system
	
	
	==> coredns [f86d4e903149f37fd40a69cf9fdd0675519e20733587ef981faf69f6c60584c4] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45112 - 19302 "HINFO IN 5917627528214500071.6117096667480374381. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.014694143s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
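
The dial timeouts to 10.96.0.1:443 mean CoreDNS could not reach the in-cluster apiserver Service for a stretch after the restart, matching the "waiting for Kubernetes API" lines; it started serving anyway with an unsynced API (the WARNING above) and kept retrying. Since kube-proxy programs the Service VIP on the node as well, reachability can be spot-checked from the node itself (assumes curl is present in the node image; illustrative):

    minikube -p default-k8s-diff-port-632243 ssh -- curl -sk -m 2 https://10.96.0.1/livez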
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-632243
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-632243
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=default-k8s-diff-port-632243
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T09_16_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 09:16:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-632243
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 09:17:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 09:17:42 +0000   Sat, 29 Nov 2025 09:16:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 09:17:42 +0000   Sat, 29 Nov 2025 09:16:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 09:17:42 +0000   Sat, 29 Nov 2025 09:16:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 09:17:42 +0000   Sat, 29 Nov 2025 09:16:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-632243
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                bcdb7d0a-1357-4cf0-985d-43631a533a4d
	  Boot ID:                    5b66e152-4b71-4129-a911-cefbf4861c86
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 coredns-66bc5c9577-z4m7c                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     100s
	  kube-system                 etcd-default-k8s-diff-port-632243                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         105s
	  kube-system                 kindnet-tpstm                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      100s
	  kube-system                 kube-apiserver-default-k8s-diff-port-632243             250m (3%)     0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-632243    200m (2%)     0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-proxy-p2nf7                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-scheduler-default-k8s-diff-port-632243             100m (1%)     0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-4hblj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-26vff                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 99s                kube-proxy       
	  Normal  Starting                 47s                kube-proxy       
	  Normal  NodeHasSufficientMemory  105s               kubelet          Node default-k8s-diff-port-632243 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    105s               kubelet          Node default-k8s-diff-port-632243 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     105s               kubelet          Node default-k8s-diff-port-632243 status is now: NodeHasSufficientPID
	  Normal  Starting                 105s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           101s               node-controller  Node default-k8s-diff-port-632243 event: Registered Node default-k8s-diff-port-632243 in Controller
	  Normal  NodeReady                89s                kubelet          Node default-k8s-diff-port-632243 status is now: NodeReady
	  Normal  Starting                 51s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  51s (x8 over 51s)  kubelet          Node default-k8s-diff-port-632243 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    51s (x8 over 51s)  kubelet          Node default-k8s-diff-port-632243 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     51s (x8 over 51s)  kubelet          Node default-k8s-diff-port-632243 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           45s                node-controller  Node default-k8s-diff-port-632243 event: Registered Node default-k8s-diff-port-632243 in Controller
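
The event list records two kubelet start-ups (105s and 51s ago), consistent with the stop/restart these StartStop tests perform; the "(x8 over 51s)" counters are the kubelet re-asserting node conditions while the apiserver came back. The full, time-ordered event stream is available with:

    kubectl get events -A --sort-by=.lastTimestamp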
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 e1 87 e4 84 ae 08 06
	[  +0.002640] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8a c7 6c 3e 12 d1 08 06
	[ +14.778105] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 56 8a 29 c7 2a 6f 08 06
	[ +13.520989] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 86 32 aa eb 52 77 08 06
	[  +0.000371] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 56 8a 29 c7 2a 6f 08 06
	[ +14.762906] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 df ae 93 da f6 08 06
	[  +0.000384] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a c7 6c 3e 12 d1 08 06
	[Nov29 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 2e a3 ac f9 5c c0 08 06
	[ +13.297626] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 33 ff 54 aa a1 08 06
	[  +7.390906] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0a 68 f3 3d 3c e9 08 06
	[  +0.000436] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 2e a3 ac f9 5c c0 08 06
	[  +7.145922] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ea c3 2f 2a a3 01 08 06
	[  +0.000415] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e2 33 ff 54 aa a1 08 06
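
The "martian source" messages are the host kernel logging 10.244.0.0/16 pod traffic that arrives on an interface with no matching return route, which is common and typically benign in this nested minikube-on-Docker setup. Whether the kernel logs them at all is controlled by a sysctl:

    sysctl net.ipv4.conf.all.log_martians    # 1 = log martians, 0 = silence them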
	
	
	==> etcd [c75e80b4e2dbb59237ca7e83b6a87a80d377951cce4c561324de39b3ea24a433] <==
	{"level":"warn","ts":"2025-11-29T09:17:11.087134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.100034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.110759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.123176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.132623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.138779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.151802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.158980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.173342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.185151Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.194495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.203750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.212826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.222578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.231761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.244597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.255325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.266536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.274448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.285250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.331453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.336433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.344618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.353409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.404797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34660","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:18:00 up  1:00,  0 user,  load average: 3.06, 3.68, 2.49
	Linux default-k8s-diff-port-632243 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [92732529bb831b3e850239c923cd55b6ba3e6316b7e319567d1bb7ed6abde79e] <==
	I1129 09:17:12.887630       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 09:17:12.887916       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1129 09:17:12.888138       1 main.go:148] setting mtu 1500 for CNI 
	I1129 09:17:12.888159       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 09:17:12.888190       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T09:17:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 09:17:13.182668       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 09:17:13.182725       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 09:17:13.182739       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 09:17:13.283201       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1129 09:17:13.682886       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 09:17:13.682924       1 metrics.go:72] Registering metrics
	I1129 09:17:13.682989       1 controller.go:711] "Syncing nftables rules"
	I1129 09:17:23.088600       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1129 09:17:23.088673       1 main.go:301] handling current node
	I1129 09:17:33.093051       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1129 09:17:33.093096       1 main.go:301] handling current node
	I1129 09:17:43.089426       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1129 09:17:43.089466       1 main.go:301] handling current node
	I1129 09:17:53.091977       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1129 09:17:53.092031       1 main.go:301] handling current node
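
kindnet's network-policy controller tries to register as an NRI plugin and falls back cleanly when the runtime has NRI disabled, hence the one-off "nri plugin exited" line; the "Handling node" entries every ~10s are its normal route-sync loop. Whether CRI-O exposes the NRI socket can be checked on the node with:

    ls -l /var/run/nri/nri.sock 2>/dev/null || echo "NRI socket not present"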
	
	
	==> kube-apiserver [b13c8a23740acd98b7a6a7244c86241544729c4895bf870e9bb842604451a0f4] <==
	I1129 09:17:12.209645       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1129 09:17:12.209730       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1129 09:17:12.216949       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1129 09:17:12.217055       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1129 09:17:12.217105       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1129 09:17:12.234219       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1129 09:17:12.236485       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1129 09:17:12.252755       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1129 09:17:12.252908       1 policy_source.go:240] refreshing policies
	I1129 09:17:12.262702       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1129 09:17:12.263047       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1129 09:17:12.263112       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1129 09:17:12.275794       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:17:12.291551       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 09:17:12.332786       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 09:17:12.634871       1 controller.go:667] quota admission added evaluator for: namespaces
	I1129 09:17:12.688969       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 09:17:12.721876       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 09:17:12.732808       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 09:17:12.798653       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.24.253"}
	I1129 09:17:12.813777       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.88.109"}
	I1129 09:17:13.066271       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 09:17:15.570240       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 09:17:15.920986       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 09:17:16.172232       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [be8adeee9f904b03165bd07f7f9279fad60f6e70a12d988e651be3f8e0e5974c] <==
	I1129 09:17:15.544486       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1129 09:17:15.547299       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1129 09:17:15.564865       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1129 09:17:15.564876       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1129 09:17:15.565214       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1129 09:17:15.565453       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1129 09:17:15.565482       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1129 09:17:15.565504       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1129 09:17:15.565920       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1129 09:17:15.567168       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1129 09:17:15.567679       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1129 09:17:15.572988       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:17:15.573014       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1129 09:17:15.573027       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1129 09:17:15.581905       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1129 09:17:15.584264       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1129 09:17:15.584265       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:17:15.585388       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1129 09:17:15.585432       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1129 09:17:15.586684       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1129 09:17:15.588897       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1129 09:17:15.591163       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1129 09:17:15.593499       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1129 09:17:15.605926       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:17:15.605992       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	
	
	==> kube-proxy [6d9c6a1fe80d134c1649a8574ce4c7fed4aca61a3d0743bc1723d61b82585852] <==
	I1129 09:17:12.681707       1 server_linux.go:53] "Using iptables proxy"
	I1129 09:17:12.753902       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 09:17:12.855529       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 09:17:12.856908       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1129 09:17:12.857196       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 09:17:12.885798       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 09:17:12.885875       1 server_linux.go:132] "Using iptables Proxier"
	I1129 09:17:12.893207       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 09:17:12.893654       1 server.go:527] "Version info" version="v1.34.1"
	I1129 09:17:12.893725       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:17:12.895394       1 config.go:200] "Starting service config controller"
	I1129 09:17:12.895424       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 09:17:12.895265       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 09:17:12.895443       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 09:17:12.895715       1 config.go:106] "Starting endpoint slice config controller"
	I1129 09:17:12.895911       1 config.go:309] "Starting node config controller"
	I1129 09:17:12.895967       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 09:17:12.895971       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 09:17:12.995652       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 09:17:12.995687       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 09:17:12.996502       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 09:17:12.996508       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
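
kube-proxy came up cleanly; the only flag-worthy item is the configuration hint that with nodePortAddresses unset, NodePort services accept connections on every local IP. The message's own suggestion maps to a one-line change in the kube-proxy ConfigMap (illustrative):

    kubectl -n kube-system edit configmap kube-proxy
    # in the config.conf key, set:  nodePortAddresses: ["primary"]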
	
	
	==> kube-scheduler [2080eaa5b786c79ead07692c870ce9928ace57a47032f699d66882570b205513] <==
	I1129 09:17:12.167728       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:17:12.171243       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:17:12.171384       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:17:12.172723       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E1129 09:17:12.173245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1129 09:17:12.172863       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1129 09:17:12.181894       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1129 09:17:12.181984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1129 09:17:12.182079       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 09:17:12.182164       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1129 09:17:12.182757       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 09:17:12.182863       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1129 09:17:12.183727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1129 09:17:12.183927       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1129 09:17:12.184042       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1129 09:17:12.184329       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1129 09:17:12.184639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1129 09:17:12.184691       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1129 09:17:12.184780       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 09:17:12.184962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1129 09:17:12.185112       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 09:17:12.185154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 09:17:12.185369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1129 09:17:12.185521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1129 09:17:13.172260       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
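	
	The burst of "Failed to watch ... is forbidden" errors above is typically a startup race: the scheduler's informers begin before the freshly restarted apiserver has its RBAC machinery warm, and the closing "Caches are synced" line shows it recovered on retry. If such errors persisted, a sketch for checking the scheduler's permissions (standard kubeadm binding names assumed):
	
	  # Confirm the scheduler's ClusterRoleBinding exists:
	  kubectl get clusterrolebinding system:kube-scheduler -o wide
	  # Dry-run one of the failing list calls as the scheduler identity:
	  kubectl auth can-i list pods --as=system:kube-scheduler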
	
	
	==> kubelet <==
	Nov 29 09:17:16 default-k8s-diff-port-632243 kubelet[738]: I1129 09:17:16.148590     738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7896abb3-c3b1-4280-9b0c-76b64c1ecdc9-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-26vff\" (UID: \"7896abb3-c3b1-4280-9b0c-76b64c1ecdc9\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-26vff"
	Nov 29 09:17:16 default-k8s-diff-port-632243 kubelet[738]: I1129 09:17:16.148743     738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9srk\" (UniqueName: \"kubernetes.io/projected/7896abb3-c3b1-4280-9b0c-76b64c1ecdc9-kube-api-access-k9srk\") pod \"kubernetes-dashboard-855c9754f9-26vff\" (UID: \"7896abb3-c3b1-4280-9b0c-76b64c1ecdc9\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-26vff"
	Nov 29 09:17:16 default-k8s-diff-port-632243 kubelet[738]: I1129 09:17:16.148789     738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbq84\" (UniqueName: \"kubernetes.io/projected/d1f2d9a9-4051-4b6b-8eb7-1d65c5e96e14-kube-api-access-sbq84\") pod \"dashboard-metrics-scraper-6ffb444bf9-4hblj\" (UID: \"d1f2d9a9-4051-4b6b-8eb7-1d65c5e96e14\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4hblj"
	Nov 29 09:17:16 default-k8s-diff-port-632243 kubelet[738]: I1129 09:17:16.148815     738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d1f2d9a9-4051-4b6b-8eb7-1d65c5e96e14-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-4hblj\" (UID: \"d1f2d9a9-4051-4b6b-8eb7-1d65c5e96e14\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4hblj"
	Nov 29 09:17:22 default-k8s-diff-port-632243 kubelet[738]: I1129 09:17:22.678617     738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-26vff" podStartSLOduration=3.453743984 podStartE2EDuration="6.678594202s" podCreationTimestamp="2025-11-29 09:17:16 +0000 UTC" firstStartedPulling="2025-11-29 09:17:16.426735187 +0000 UTC m=+7.312503419" lastFinishedPulling="2025-11-29 09:17:19.6515854 +0000 UTC m=+10.537353637" observedRunningTime="2025-11-29 09:17:20.323358019 +0000 UTC m=+11.209126260" watchObservedRunningTime="2025-11-29 09:17:22.678594202 +0000 UTC m=+13.564362449"
	Nov 29 09:17:23 default-k8s-diff-port-632243 kubelet[738]: I1129 09:17:23.309927     738 scope.go:117] "RemoveContainer" containerID="7ae6aa913848d1746e5e27c0db29a6cfcaac2bc794cb23b355bc43b3a082682f"
	Nov 29 09:17:24 default-k8s-diff-port-632243 kubelet[738]: I1129 09:17:24.314273     738 scope.go:117] "RemoveContainer" containerID="7ae6aa913848d1746e5e27c0db29a6cfcaac2bc794cb23b355bc43b3a082682f"
	Nov 29 09:17:24 default-k8s-diff-port-632243 kubelet[738]: I1129 09:17:24.314555     738 scope.go:117] "RemoveContainer" containerID="d9bf366d8ee10b072a7c9c6fa1d3c7fd589a7189275bd7b0bd93e19de4752a2d"
	Nov 29 09:17:24 default-k8s-diff-port-632243 kubelet[738]: E1129 09:17:24.314740     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4hblj_kubernetes-dashboard(d1f2d9a9-4051-4b6b-8eb7-1d65c5e96e14)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4hblj" podUID="d1f2d9a9-4051-4b6b-8eb7-1d65c5e96e14"
	Nov 29 09:17:25 default-k8s-diff-port-632243 kubelet[738]: I1129 09:17:25.318249     738 scope.go:117] "RemoveContainer" containerID="d9bf366d8ee10b072a7c9c6fa1d3c7fd589a7189275bd7b0bd93e19de4752a2d"
	Nov 29 09:17:25 default-k8s-diff-port-632243 kubelet[738]: E1129 09:17:25.318417     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4hblj_kubernetes-dashboard(d1f2d9a9-4051-4b6b-8eb7-1d65c5e96e14)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4hblj" podUID="d1f2d9a9-4051-4b6b-8eb7-1d65c5e96e14"
	Nov 29 09:17:28 default-k8s-diff-port-632243 kubelet[738]: I1129 09:17:28.340525     738 scope.go:117] "RemoveContainer" containerID="d9bf366d8ee10b072a7c9c6fa1d3c7fd589a7189275bd7b0bd93e19de4752a2d"
	Nov 29 09:17:28 default-k8s-diff-port-632243 kubelet[738]: E1129 09:17:28.340764     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4hblj_kubernetes-dashboard(d1f2d9a9-4051-4b6b-8eb7-1d65c5e96e14)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4hblj" podUID="d1f2d9a9-4051-4b6b-8eb7-1d65c5e96e14"
	Nov 29 09:17:41 default-k8s-diff-port-632243 kubelet[738]: I1129 09:17:41.228769     738 scope.go:117] "RemoveContainer" containerID="d9bf366d8ee10b072a7c9c6fa1d3c7fd589a7189275bd7b0bd93e19de4752a2d"
	Nov 29 09:17:41 default-k8s-diff-port-632243 kubelet[738]: I1129 09:17:41.361586     738 scope.go:117] "RemoveContainer" containerID="d9bf366d8ee10b072a7c9c6fa1d3c7fd589a7189275bd7b0bd93e19de4752a2d"
	Nov 29 09:17:41 default-k8s-diff-port-632243 kubelet[738]: I1129 09:17:41.361818     738 scope.go:117] "RemoveContainer" containerID="7ef057717f2cb9bfee390527d1229b602af731a3770cbb94529401105f0694e2"
	Nov 29 09:17:41 default-k8s-diff-port-632243 kubelet[738]: E1129 09:17:41.362045     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4hblj_kubernetes-dashboard(d1f2d9a9-4051-4b6b-8eb7-1d65c5e96e14)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4hblj" podUID="d1f2d9a9-4051-4b6b-8eb7-1d65c5e96e14"
	Nov 29 09:17:43 default-k8s-diff-port-632243 kubelet[738]: I1129 09:17:43.369814     738 scope.go:117] "RemoveContainer" containerID="dcd5e71f1547b1e671741b763fc8bcd6c37b199a7a47b50bded35e37d88f15e5"
	Nov 29 09:17:48 default-k8s-diff-port-632243 kubelet[738]: I1129 09:17:48.339991     738 scope.go:117] "RemoveContainer" containerID="7ef057717f2cb9bfee390527d1229b602af731a3770cbb94529401105f0694e2"
	Nov 29 09:17:48 default-k8s-diff-port-632243 kubelet[738]: E1129 09:17:48.340268     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4hblj_kubernetes-dashboard(d1f2d9a9-4051-4b6b-8eb7-1d65c5e96e14)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4hblj" podUID="d1f2d9a9-4051-4b6b-8eb7-1d65c5e96e14"
	Nov 29 09:17:58 default-k8s-diff-port-632243 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 29 09:17:58 default-k8s-diff-port-632243 kubelet[738]: I1129 09:17:58.030180     738 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 29 09:17:58 default-k8s-diff-port-632243 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 29 09:17:58 default-k8s-diff-port-632243 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 29 09:17:58 default-k8s-diff-port-632243 systemd[1]: kubelet.service: Consumed 1.741s CPU time.
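	
	Two things stand out above: dashboard-metrics-scraper is in CrashLoopBackOff with the back-off doubling from 10s to 20s as expected, and the systemd stop at 09:17:58 is consistent with the pause command under test (pausing stops the kubelet). To see why the scraper container keeps exiting, a sketch using the pod and context names from this report:
	
	  # Logs from the last crashed attempt:
	  kubectl --context default-k8s-diff-port-632243 -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-4hblj --previous
	  # Events, including the restart/back-off timeline:
	  kubectl --context default-k8s-diff-port-632243 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-4hblj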
	
	
	==> kubernetes-dashboard [4886082fa2dbfffcedb9ba51af5ccb52f1828c3b9f03f1a2f251ec784b244659] <==
	2025/11/29 09:17:19 Starting overwatch
	2025/11/29 09:17:19 Using namespace: kubernetes-dashboard
	2025/11/29 09:17:19 Using in-cluster config to connect to apiserver
	2025/11/29 09:17:19 Using secret token for csrf signing
	2025/11/29 09:17:19 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/29 09:17:19 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/29 09:17:19 Successful initial request to the apiserver, version: v1.34.1
	2025/11/29 09:17:19 Generating JWE encryption key
	2025/11/29 09:17:19 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/29 09:17:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/29 09:17:19 Initializing JWE encryption key from synchronized object
	2025/11/29 09:17:19 Creating in-cluster Sidecar client
	2025/11/29 09:17:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/29 09:17:19 Serving insecurely on HTTP port: 9090
	2025/11/29 09:17:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
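	
	The repeated metric-client failure is consistent with the crash-looping dashboard-metrics-scraper shown in the kubelet log: the dashboard retries every 30 seconds and serves without metrics in the meantime. A quick check that the scraper Service has no ready endpoints (standard object names assumed):
	
	  kubectl --context default-k8s-diff-port-632243 -n kubernetes-dashboard get svc dashboard-metrics-scraper
	  kubectl --context default-k8s-diff-port-632243 -n kubernetes-dashboard get endpointslices -l kubernetes.io/service-name=dashboard-metrics-scraper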
	
	
	==> storage-provisioner [a0a7d268774521215ec9a9e231f8cbff24ab751bb32017578f757b56b868382b] <==
	I1129 09:17:43.421527       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1129 09:17:43.430172       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1129 09:17:43.430237       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1129 09:17:43.432879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:17:46.887946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:17:51.148895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:17:54.747771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:17:57.801989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:00.824932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:00.830243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:18:00.830427       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 09:18:00.830573       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3d9080e2-1f84-4caa-8750-c2395a4c0f6c", APIVersion:"v1", ResourceVersion:"628", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-632243_b75939db-fabd-43a5-bf52-db236e980f77 became leader
	I1129 09:18:00.830641       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-632243_b75939db-fabd-43a5-bf52-db236e980f77!
	W1129 09:18:00.833022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:00.836742       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:18:00.931684       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-632243_b75939db-fabd-43a5-bf52-db236e980f77!
	
	
	==> storage-provisioner [dcd5e71f1547b1e671741b763fc8bcd6c37b199a7a47b50bded35e37d88f15e5] <==
	I1129 09:17:12.652238       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1129 09:17:42.655356       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
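
Of note in the dump above: the first storage-provisioner instance died with "dial tcp 10.96.0.1:443: i/o timeout" (the in-cluster apiserver Service was unreachable right after the restart), while its replacement connected and won the leader lease. A sketch of an in-cluster probe for that address, assuming the curlimages/curl image is pullable:

  kubectl --context default-k8s-diff-port-632243 run api-probe --rm -i --restart=Never --image=curlimages/curl --command -- curl -k -m 5 https://10.96.0.1:443/version
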
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-632243 -n default-k8s-diff-port-632243
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-632243 -n default-k8s-diff-port-632243: exit status 2 (374.678901ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
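
A non-zero exit with "Running" on stdout is plausible here: minikube status encodes overall cluster health in its exit code, so a single templated field can print Running while the command still exits 2 (hence "may be ok" above). One way to see every status field at once instead of one template:

  out/minikube-linux-amd64 status -p default-k8s-diff-port-632243 --output json
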
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-632243 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-632243
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-632243:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "34542347c69bda84e7d5a150f01f074a3f776313cbcfbd090bb64fe6de277d88",
	        "Created": "2025-11-29T09:16:00.909438015Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 337290,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T09:17:02.786122524Z",
	            "FinishedAt": "2025-11-29T09:17:01.791647739Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/34542347c69bda84e7d5a150f01f074a3f776313cbcfbd090bb64fe6de277d88/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/34542347c69bda84e7d5a150f01f074a3f776313cbcfbd090bb64fe6de277d88/hostname",
	        "HostsPath": "/var/lib/docker/containers/34542347c69bda84e7d5a150f01f074a3f776313cbcfbd090bb64fe6de277d88/hosts",
	        "LogPath": "/var/lib/docker/containers/34542347c69bda84e7d5a150f01f074a3f776313cbcfbd090bb64fe6de277d88/34542347c69bda84e7d5a150f01f074a3f776313cbcfbd090bb64fe6de277d88-json.log",
	        "Name": "/default-k8s-diff-port-632243",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-632243:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-632243",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "34542347c69bda84e7d5a150f01f074a3f776313cbcfbd090bb64fe6de277d88",
	                "LowerDir": "/var/lib/docker/overlay2/7263fb3772af2f1b363fa16d989f215dd7f46480236fb7471fbfb55fcc94f1fb-init/diff:/var/lib/docker/overlay2/5b012372cfb54f6c71f4d7f0bca0124866eeda530eaa04bd84a67c7b4c8b35a8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7263fb3772af2f1b363fa16d989f215dd7f46480236fb7471fbfb55fcc94f1fb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7263fb3772af2f1b363fa16d989f215dd7f46480236fb7471fbfb55fcc94f1fb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7263fb3772af2f1b363fa16d989f215dd7f46480236fb7471fbfb55fcc94f1fb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-632243",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-632243/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-632243",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-632243",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-632243",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "cf263793e0f0ee74ceb8db2201473b4f33060de791688a1ff4ee23ec22feed75",
	            "SandboxKey": "/var/run/docker/netns/cf263793e0f0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-632243": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7a23ed3dab8d4d6fb6f9edc51b6864da564467aa8f10cf2599da81a3bf2593e1",
	                    "EndpointID": "c0a55e71f7c94755d0fc30b355b6da6e868cf8770d96b3927d4b974a9d3b98e6",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "6e:61:8e:1e:ad:34",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-632243",
	                        "34542347c69b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
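
The inspect dump shows the node container itself running and unpaused, consistent with minikube pausing workloads inside the node rather than pausing the Docker container. When only a few fields matter, a Go template trims the output, in the same style as the --format templates used elsewhere in this report:

  docker inspect -f '{{.State.Status}} paused={{.State.Paused}} ip={{(index .NetworkSettings.Networks "default-k8s-diff-port-632243").IPAddress}}' default-k8s-diff-port-632243
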
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-632243 -n default-k8s-diff-port-632243
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-632243 -n default-k8s-diff-port-632243: exit status 2 (410.840751ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-632243 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-632243 logs -n 25: (1.270560687s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p old-k8s-version-680646 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:16 UTC │
	│ addons  │ enable metrics-server -p no-preload-897274 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	│ stop    │ -p no-preload-897274 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:16 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-680646 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:16 UTC │
	│ start   │ -p old-k8s-version-680646 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:17 UTC │
	│ start   │ -p no-preload-897274 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-160987 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-632243 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	│ stop    │ -p embed-certs-160987 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:17 UTC │
	│ stop    │ -p default-k8s-diff-port-632243 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-160987 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ start   │ -p embed-certs-160987 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-632243 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ start   │ -p default-k8s-diff-port-632243 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ image   │ old-k8s-version-680646 image list --format=json                                                                                                                                                                                               │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ pause   │ -p old-k8s-version-680646 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │                     │
	│ delete  │ -p old-k8s-version-680646                                                                                                                                                                                                                     │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ image   │ no-preload-897274 image list --format=json                                                                                                                                                                                                    │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ pause   │ -p no-preload-897274 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │                     │
	│ delete  │ -p old-k8s-version-680646                                                                                                                                                                                                                     │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ start   │ -p newest-cni-020433 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-020433            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │                     │
	│ delete  │ -p no-preload-897274                                                                                                                                                                                                                          │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ delete  │ -p no-preload-897274                                                                                                                                                                                                                          │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ image   │ default-k8s-diff-port-632243 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ pause   │ -p default-k8s-diff-port-632243 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:17:32
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 09:17:32.750525  343912 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:17:32.750831  343912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:17:32.750854  343912 out.go:374] Setting ErrFile to fd 2...
	I1129 09:17:32.750859  343912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:17:32.751040  343912 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 09:17:32.751569  343912 out.go:368] Setting JSON to false
	I1129 09:17:32.753086  343912 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3605,"bootTime":1764404248,"procs":412,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 09:17:32.753155  343912 start.go:143] virtualization: kvm guest
	I1129 09:17:32.755163  343912 out.go:179] * [newest-cni-020433] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 09:17:32.756656  343912 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:17:32.756692  343912 notify.go:221] Checking for updates...
	I1129 09:17:32.759425  343912 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:17:32.760722  343912 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 09:17:32.765362  343912 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5652/.minikube
	I1129 09:17:32.766699  343912 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 09:17:32.768011  343912 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:17:32.769812  343912 config.go:182] Loaded profile config "default-k8s-diff-port-632243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:17:32.769952  343912 config.go:182] Loaded profile config "embed-certs-160987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:17:32.770081  343912 config.go:182] Loaded profile config "no-preload-897274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:17:32.770208  343912 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:17:32.794655  343912 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1129 09:17:32.794775  343912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:17:32.856269  343912 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:74 SystemTime:2025-11-29 09:17:32.845151576 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:17:32.856388  343912 docker.go:319] overlay module found
	I1129 09:17:32.858258  343912 out.go:179] * Using the docker driver based on user configuration
	I1129 09:17:32.859415  343912 start.go:309] selected driver: docker
	I1129 09:17:32.859434  343912 start.go:927] validating driver "docker" against <nil>
	I1129 09:17:32.859451  343912 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:17:32.860352  343912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:17:32.930751  343912 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:74 SystemTime:2025-11-29 09:17:32.91839311 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:17:32.930951  343912 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1129 09:17:32.930985  343912 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1129 09:17:32.931224  343912 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1129 09:17:32.933425  343912 out.go:179] * Using Docker driver with root privileges
	I1129 09:17:32.934824  343912 cni.go:84] Creating CNI manager for ""
	I1129 09:17:32.934925  343912 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:17:32.934944  343912 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1129 09:17:32.935044  343912 start.go:353] cluster config:
	{Name:newest-cni-020433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-020433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:17:32.936354  343912 out.go:179] * Starting "newest-cni-020433" primary control-plane node in "newest-cni-020433" cluster
	I1129 09:17:32.937514  343912 cache.go:134] Beginning downloading kic base image for docker with crio
	I1129 09:17:32.938803  343912 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 09:17:32.940016  343912 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:17:32.940051  343912 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1129 09:17:32.940062  343912 cache.go:65] Caching tarball of preloaded images
	I1129 09:17:32.940107  343912 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 09:17:32.940163  343912 preload.go:238] Found /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1129 09:17:32.940176  343912 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1129 09:17:32.940278  343912 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/config.json ...
	I1129 09:17:32.940301  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/config.json: {Name:mk7d4da653b0e884b27837053cd3d354c3ff76e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:32.963727  343912 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 09:17:32.963754  343912 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 09:17:32.963777  343912 cache.go:243] Successfully downloaded all kic artifacts
	I1129 09:17:32.963830  343912 start.go:360] acquireMachinesLock for newest-cni-020433: {Name:mk6347901682a01c9d317c6a402722ce1e16792e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:17:32.963998  343912 start.go:364] duration metric: took 95.455µs to acquireMachinesLock for "newest-cni-020433"
	I1129 09:17:32.964029  343912 start.go:93] Provisioning new machine with config: &{Name:newest-cni-020433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-020433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 09:17:32.964128  343912 start.go:125] createHost starting for "" (driver="docker")
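	The acquireMachinesLock step above serializes machine creation across concurrent minikube invocations (the Name:mk63479... lock with Delay:500ms Timeout:10m0s). minikube uses the juju/mutex package for this; the sketch below illustrates the same pattern with a plain flock(2) lock file and the 500ms retry delay from the log. The lock path and helper name are hypothetical, not minikube's actual code.

	package main

	import (
		"fmt"
		"os"
		"syscall" // Linux-only: flock(2)
		"time"
	)

	// acquireLock takes an exclusive, non-blocking flock on path, retrying
	// every 500ms until timeout, mirroring the Delay/Timeout in the log.
	func acquireLock(path string, timeout time.Duration) (*os.File, error) {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o644)
		if err != nil {
			return nil, err
		}
		deadline := time.Now().Add(timeout)
		for {
			// LOCK_NB keeps Flock non-blocking so we can enforce our own deadline.
			err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB)
			if err == nil {
				return f, nil
			}
			if time.Now().After(deadline) {
				f.Close()
				return nil, fmt.Errorf("timed out acquiring %s: %w", path, err)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		start := time.Now()
		f, err := acquireLock("/tmp/newest-cni-020433.lock", 10*time.Minute)
		if err != nil {
			panic(err)
		}
		defer f.Close()
		fmt.Printf("duration metric: took %s to acquire lock\n", time.Since(start))
	}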
	W1129 09:17:33.828970  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	W1129 09:17:35.829789  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	W1129 09:17:33.948277  336858 pod_ready.go:104] pod "coredns-66bc5c9577-z4m7c" is not "Ready", error: <nil>
	W1129 09:17:36.448064  336858 pod_ready.go:104] pod "coredns-66bc5c9577-z4m7c" is not "Ready", error: <nil>
	I1129 09:17:32.965989  343912 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1129 09:17:32.966316  343912 start.go:159] libmachine.API.Create for "newest-cni-020433" (driver="docker")
	I1129 09:17:32.966356  343912 client.go:173] LocalClient.Create starting
	I1129 09:17:32.966470  343912 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem
	I1129 09:17:32.966524  343912 main.go:143] libmachine: Decoding PEM data...
	I1129 09:17:32.966555  343912 main.go:143] libmachine: Parsing certificate...
	I1129 09:17:32.966626  343912 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem
	I1129 09:17:32.966654  343912 main.go:143] libmachine: Decoding PEM data...
	I1129 09:17:32.966670  343912 main.go:143] libmachine: Parsing certificate...
	I1129 09:17:32.967123  343912 cli_runner.go:164] Run: docker network inspect newest-cni-020433 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1129 09:17:32.987734  343912 cli_runner.go:211] docker network inspect newest-cni-020433 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1129 09:17:32.987872  343912 network_create.go:284] running [docker network inspect newest-cni-020433] to gather additional debugging logs...
	I1129 09:17:32.987905  343912 cli_runner.go:164] Run: docker network inspect newest-cni-020433
	W1129 09:17:33.007164  343912 cli_runner.go:211] docker network inspect newest-cni-020433 returned with exit code 1
	I1129 09:17:33.007194  343912 network_create.go:287] error running [docker network inspect newest-cni-020433]: docker network inspect newest-cni-020433: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-020433 not found
	I1129 09:17:33.007209  343912 network_create.go:289] output of [docker network inspect newest-cni-020433]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-020433 not found
	
	** /stderr **
	I1129 09:17:33.007343  343912 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:17:33.027663  343912 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-94fc752bc7a7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:ed:43:e0:ad:5a} reservation:<nil>}
	I1129 09:17:33.028420  343912 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4cfc302f5d5a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:42:73:ac:ba:18:bb} reservation:<nil>}
	I1129 09:17:33.029339  343912 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-05a73bbe16b8 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:a9:af:00:78:ac} reservation:<nil>}
	I1129 09:17:33.030217  343912 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eb6cd0}
	I1129 09:17:33.030243  343912 network_create.go:124] attempt to create docker network newest-cni-020433 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1129 09:17:33.030303  343912 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-020433 newest-cni-020433
	I1129 09:17:33.088543  343912 network_create.go:108] docker network newest-cni-020433 192.168.76.0/24 created
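	The subnet scan above walks candidate private /24s (192.168.49.0, 58.0, 67.0, 76.0, in steps of 9) and skips any that already belong to an existing Docker bridge before creating the network. A minimal Go sketch of the same idea, checking candidates against addresses bound to host interfaces instead of parsing docker network inspect output; the helper is illustrative, not minikube's network.go.

	package main

	import (
		"fmt"
		"net"
	)

	// subnetTaken reports whether any host interface address falls inside cidr.
	func subnetTaken(cidr string) bool {
		_, candidate, _ := net.ParseCIDR(cidr)
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return true // be conservative on error
		}
		for _, a := range addrs {
			if ipn, ok := a.(*net.IPNet); ok && candidate.Contains(ipn.IP) {
				return true
			}
		}
		return false
	}

	func main() {
		// Same candidate sequence as the log: 192.168.49.0/24, 58.0, 67.0, 76.0, ...
		for third := 49; third <= 255; third += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", third)
			if subnetTaken(cidr) {
				fmt.Println("skipping subnet", cidr, "that is taken")
				continue
			}
			fmt.Println("using free private subnet", cidr)
			return
		}
	}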
	I1129 09:17:33.088582  343912 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-020433" container
	I1129 09:17:33.088651  343912 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1129 09:17:33.110031  343912 cli_runner.go:164] Run: docker volume create newest-cni-020433 --label name.minikube.sigs.k8s.io=newest-cni-020433 --label created_by.minikube.sigs.k8s.io=true
	I1129 09:17:33.131986  343912 oci.go:103] Successfully created a docker volume newest-cni-020433
	I1129 09:17:33.132086  343912 cli_runner.go:164] Run: docker run --rm --name newest-cni-020433-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-020433 --entrypoint /usr/bin/test -v newest-cni-020433:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1129 09:17:33.542784  343912 oci.go:107] Successfully prepared a docker volume newest-cni-020433
	I1129 09:17:33.542890  343912 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:17:33.542904  343912 kic.go:194] Starting extracting preloaded images to volume ...
	I1129 09:17:33.542963  343912 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-020433:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	W1129 09:17:38.328506  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	W1129 09:17:40.827427  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	W1129 09:17:38.452229  336858 pod_ready.go:104] pod "coredns-66bc5c9577-z4m7c" is not "Ready", error: <nil>
	W1129 09:17:40.947913  336858 pod_ready.go:104] pod "coredns-66bc5c9577-z4m7c" is not "Ready", error: <nil>
	I1129 09:17:38.398985  343912 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-020433:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.855972089s)
	I1129 09:17:38.399017  343912 kic.go:203] duration metric: took 4.856111068s to extract preloaded images to volume ...
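	The extraction step timed above runs tar inside a throwaway kicbase container so the host needs no lz4 tooling; the preload tarball is bind-mounted read-only and unpacked straight into the named volume. A stripped-down equivalent using the same docker flags as the log (the tarball path is a placeholder):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		tarball := "/path/to/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4"
		volume := "newest-cni-020433"
		image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948"

		start := time.Now()
		// docker run --rm --entrypoint /usr/bin/tar -v <tarball>:/preloaded.tar:ro \
		//   -v <volume>:/extractDir <image> -I lz4 -xf /preloaded.tar -C /extractDir
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", volume+":/extractDir",
			image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		if out, err := cmd.CombinedOutput(); err != nil {
			panic(fmt.Sprintf("%v: %s", err, out))
		}
		fmt.Printf("duration metric: took %s to extract preloaded images\n", time.Since(start))
	}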
	W1129 09:17:38.399145  343912 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1129 09:17:38.399190  343912 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1129 09:17:38.399238  343912 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1129 09:17:38.467132  343912 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-020433 --name newest-cni-020433 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-020433 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-020433 --network newest-cni-020433 --ip 192.168.76.2 --volume newest-cni-020433:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1129 09:17:39.064807  343912 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Running}}
	I1129 09:17:39.085951  343912 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Status}}
	I1129 09:17:39.108652  343912 cli_runner.go:164] Run: docker exec newest-cni-020433 stat /var/lib/dpkg/alternatives/iptables
	I1129 09:17:39.159933  343912 oci.go:144] the created container "newest-cni-020433" has a running status.
	I1129 09:17:39.159970  343912 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa...
	I1129 09:17:39.228797  343912 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1129 09:17:39.262675  343912 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Status}}
	I1129 09:17:39.285576  343912 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1129 09:17:39.285600  343912 kic_runner.go:114] Args: [docker exec --privileged newest-cni-020433 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1129 09:17:39.349410  343912 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Status}}
	I1129 09:17:39.369689  343912 machine.go:94] provisionDockerMachine start ...
	I1129 09:17:39.369803  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:39.396522  343912 main.go:143] libmachine: Using SSH client type: native
	I1129 09:17:39.396932  343912 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1129 09:17:39.396965  343912 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:17:39.397982  343912 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
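	The "handshake failed: EOF" line above is expected on the first attempt: sshd inside the freshly started container is not up yet, and minikube's SSH client retries until it is (the next line, three seconds later, succeeds). A bare-bones readiness probe for the forwarded port from this run (127.0.0.1:33129); note that a TCP accept is a weaker check than a completed SSH handshake, so this is only an approximation.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForTCP polls addr until a TCP connection succeeds or timeout elapses.
	func waitForTCP(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("%s not reachable: %w", addr, err)
			}
			time.Sleep(time.Second)
		}
	}

	func main() {
		if err := waitForTCP("127.0.0.1:33129", time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("ssh port is accepting connections")
	}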
	I1129 09:17:42.550448  343912 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-020433
	
	I1129 09:17:42.550474  343912 ubuntu.go:182] provisioning hostname "newest-cni-020433"
	I1129 09:17:42.550527  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:42.572133  343912 main.go:143] libmachine: Using SSH client type: native
	I1129 09:17:42.572440  343912 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1129 09:17:42.572461  343912 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-020433 && echo "newest-cni-020433" | sudo tee /etc/hostname
	I1129 09:17:42.733805  343912 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-020433
	
	I1129 09:17:42.733897  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:42.754783  343912 main.go:143] libmachine: Using SSH client type: native
	I1129 09:17:42.755144  343912 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1129 09:17:42.755173  343912 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-020433' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-020433/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-020433' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:17:42.901064  343912 main.go:143] libmachine: SSH cmd err, output: <nil>: 
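	The shell snippet executed above is an idempotent /etc/hosts edit: it adds or rewrites the 127.0.1.1 alias only if the hostname is not already resolvable from the file. The same logic in Go, run against a sample copy rather than the real /etc/hosts (path and helper name are illustrative):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry maps hostname to 127.0.1.1 unless some line already
	// lists it, mirroring the grep/sed/tee logic in the log.
	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(string(data), "\n")
		for _, l := range lines {
			fields := strings.Fields(l)
			for i := 1; i < len(fields); i++ {
				if fields[i] == hostname {
					return nil // hostname already resolvable locally
				}
			}
		}
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + hostname // rewrite the existing alias line
				return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
			}
		}
		lines = append(lines, "127.0.1.1 "+hostname)
		return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
	}

	func main() {
		path := "/tmp/hosts.sample"
		os.WriteFile(path, []byte("127.0.0.1 localhost\n"), 0o644)
		if err := ensureHostsEntry(path, "newest-cni-020433"); err != nil {
			panic(err)
		}
		fmt.Println("hosts entry ensured in", path)
	}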
	I1129 09:17:42.901098  343912 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-5652/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-5652/.minikube}
	I1129 09:17:42.901148  343912 ubuntu.go:190] setting up certificates
	I1129 09:17:42.901161  343912 provision.go:84] configureAuth start
	I1129 09:17:42.901231  343912 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-020433
	I1129 09:17:42.921161  343912 provision.go:143] copyHostCerts
	I1129 09:17:42.921240  343912 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem, removing ...
	I1129 09:17:42.921253  343912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem
	I1129 09:17:42.921344  343912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem (1078 bytes)
	I1129 09:17:42.921497  343912 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem, removing ...
	I1129 09:17:42.921509  343912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem
	I1129 09:17:42.921568  343912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem (1123 bytes)
	I1129 09:17:42.921658  343912 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem, removing ...
	I1129 09:17:42.921666  343912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem
	I1129 09:17:42.921693  343912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem (1675 bytes)
	I1129 09:17:42.921761  343912 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem org=jenkins.newest-cni-020433 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-020433]
	I1129 09:17:43.032466  343912 provision.go:177] copyRemoteCerts
	I1129 09:17:43.032525  343912 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:17:43.032558  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:43.052823  343912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:17:43.158233  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1129 09:17:43.179138  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1129 09:17:43.198311  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1129 09:17:43.217652  343912 provision.go:87] duration metric: took 316.475572ms to configureAuth
	I1129 09:17:43.217682  343912 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:17:43.217917  343912 config.go:182] Loaded profile config "newest-cni-020433": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:17:43.218034  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:43.237980  343912 main.go:143] libmachine: Using SSH client type: native
	I1129 09:17:43.238211  343912 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1129 09:17:43.238225  343912 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 09:17:43.535016  343912 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 09:17:43.535041  343912 machine.go:97] duration metric: took 4.165320057s to provisionDockerMachine
	I1129 09:17:43.535052  343912 client.go:176] duration metric: took 10.568687757s to LocalClient.Create
	I1129 09:17:43.535073  343912 start.go:167] duration metric: took 10.568756916s to libmachine.API.Create "newest-cni-020433"
	I1129 09:17:43.535083  343912 start.go:293] postStartSetup for "newest-cni-020433" (driver="docker")
	I1129 09:17:43.535095  343912 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:17:43.535160  343912 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:17:43.535203  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:43.554574  343912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:17:43.661234  343912 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:17:43.665051  343912 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:17:43.665086  343912 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:17:43.665114  343912 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5652/.minikube/addons for local assets ...
	I1129 09:17:43.665186  343912 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5652/.minikube/files for local assets ...
	I1129 09:17:43.665301  343912 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem -> 92162.pem in /etc/ssl/certs
	I1129 09:17:43.665409  343912 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:17:43.674165  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem --> /etc/ssl/certs/92162.pem (1708 bytes)
	I1129 09:17:43.696383  343912 start.go:296] duration metric: took 161.286243ms for postStartSetup
	I1129 09:17:43.696751  343912 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-020433
	I1129 09:17:43.716301  343912 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/config.json ...
	I1129 09:17:43.716589  343912 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:17:43.716640  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:43.735518  343912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:17:43.835307  343912 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 09:17:43.840211  343912 start.go:128] duration metric: took 10.876067654s to createHost
	I1129 09:17:43.840237  343912 start.go:83] releasing machines lock for "newest-cni-020433", held for 10.876224942s
	I1129 09:17:43.840309  343912 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-020433
	I1129 09:17:43.860942  343912 ssh_runner.go:195] Run: cat /version.json
	I1129 09:17:43.860995  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:43.861019  343912 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:17:43.861110  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:43.881396  343912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:17:43.881825  343912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:17:44.035348  343912 ssh_runner.go:195] Run: systemctl --version
	I1129 09:17:44.042398  343912 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 09:17:44.079667  343912 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:17:44.084668  343912 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:17:44.084747  343912 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:17:44.112611  343912 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1129 09:17:44.112638  343912 start.go:496] detecting cgroup driver to use...
	I1129 09:17:44.112675  343912 detect.go:190] detected "systemd" cgroup driver on host os
	I1129 09:17:44.112721  343912 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 09:17:44.130191  343912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 09:17:44.143333  343912 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:17:44.143407  343912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:17:44.160522  343912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:17:44.179005  343912 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:17:44.264507  343912 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:17:44.361596  343912 docker.go:234] disabling docker service ...
	I1129 09:17:44.361665  343912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:17:44.385098  343912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:17:44.399261  343912 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:17:44.490353  343912 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:17:44.577339  343912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:17:44.590606  343912 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:17:44.606040  343912 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 09:17:44.606113  343912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:44.617850  343912 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1129 09:17:44.617930  343912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:44.627795  343912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:44.637388  343912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:44.647881  343912 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:17:44.657593  343912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:44.667667  343912 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:44.683312  343912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:44.693180  343912 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:17:44.701299  343912 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:17:44.709519  343912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:17:44.789707  343912 ssh_runner.go:195] Run: sudo systemctl restart crio
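	The series of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before the restart: it pins the pause image, switches the cgroup manager to systemd, and injects the ip_unprivileged_port_start sysctl. The two central substitutions expressed with Go's regexp package, operating on a local copy of the file (the path here is a placeholder, not a claim about the host layout):

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		path := "02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
		pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		data = pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
		// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
		cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		data = cgroup.ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
		if err := os.WriteFile(path, data, 0o644); err != nil {
			panic(err)
		}
	}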
	I1129 09:17:44.946719  343912 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 09:17:44.946786  343912 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 09:17:44.950988  343912 start.go:564] Will wait 60s for crictl version
	I1129 09:17:44.951061  343912 ssh_runner.go:195] Run: which crictl
	I1129 09:17:44.954897  343912 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:17:44.981273  343912 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1129 09:17:44.981355  343912 ssh_runner.go:195] Run: crio --version
	I1129 09:17:45.010241  343912 ssh_runner.go:195] Run: crio --version
	I1129 09:17:45.041932  343912 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1129 09:17:45.043598  343912 cli_runner.go:164] Run: docker network inspect newest-cni-020433 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:17:45.064493  343912 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1129 09:17:45.068916  343912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:17:45.081636  343912 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1129 09:17:43.447332  336858 pod_ready.go:104] pod "coredns-66bc5c9577-z4m7c" is not "Ready", error: <nil>
	I1129 09:17:44.449613  336858 pod_ready.go:94] pod "coredns-66bc5c9577-z4m7c" is "Ready"
	I1129 09:17:44.449647  336858 pod_ready.go:86] duration metric: took 31.007906695s for pod "coredns-66bc5c9577-z4m7c" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:44.452244  336858 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:44.456751  336858 pod_ready.go:94] pod "etcd-default-k8s-diff-port-632243" is "Ready"
	I1129 09:17:44.456779  336858 pod_ready.go:86] duration metric: took 4.509231ms for pod "etcd-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:44.458972  336858 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:44.464014  336858 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-632243" is "Ready"
	I1129 09:17:44.464045  336858 pod_ready.go:86] duration metric: took 5.045626ms for pod "kube-apiserver-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:44.466444  336858 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:44.645988  336858 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-632243" is "Ready"
	I1129 09:17:44.646021  336858 pod_ready.go:86] duration metric: took 179.551463ms for pod "kube-controller-manager-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:44.845460  336858 pod_ready.go:83] waiting for pod "kube-proxy-p2nf7" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:45.245518  336858 pod_ready.go:94] pod "kube-proxy-p2nf7" is "Ready"
	I1129 09:17:45.245548  336858 pod_ready.go:86] duration metric: took 400.053767ms for pod "kube-proxy-p2nf7" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:45.445969  336858 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:45.847024  336858 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-632243" is "Ready"
	I1129 09:17:45.847054  336858 pod_ready.go:86] duration metric: took 401.056115ms for pod "kube-scheduler-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:45.847067  336858 pod_ready.go:40] duration metric: took 32.409409019s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:17:45.894722  336858 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1129 09:17:45.896514  336858 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-632243" cluster and "default" namespace by default
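	The 31s coredns wait that precedes this "Done!" line is a poll loop over pod conditions until the pod reports Ready or disappears. A bare-bones version of such a loop, shelling out to kubectl with a JSONPath query; it assumes kubeconfig already points at the cluster and uses the pod name from this run purely as an example.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// podReady returns true when the pod's Ready condition is "True".
	func podReady(ns, name string) (bool, error) {
		out, err := exec.Command("kubectl", "get", "pod", name, "-n", ns,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) == "True", nil
	}

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			ok, err := podReady("kube-system", "coredns-66bc5c9577-z4m7c")
			if err == nil && ok {
				fmt.Println(`pod is "Ready"`)
				return
			}
			fmt.Println(`pod is not "Ready", retrying`)
			time.Sleep(2 * time.Second)
		}
		panic("timed out waiting for pod")
	}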
	W1129 09:17:42.828310  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	W1129 09:17:44.828378  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	I1129 09:17:45.082734  343912 kubeadm.go:884] updating cluster {Name:newest-cni-020433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-020433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:17:45.082902  343912 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:17:45.082966  343912 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:17:45.116711  343912 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:17:45.116737  343912 crio.go:433] Images already preloaded, skipping extraction
	I1129 09:17:45.116794  343912 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:17:45.143455  343912 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:17:45.143477  343912 cache_images.go:86] Images are preloaded, skipping loading
	I1129 09:17:45.143484  343912 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1129 09:17:45.143562  343912 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-020433 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-020433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 09:17:45.143624  343912 ssh_runner.go:195] Run: crio config
	I1129 09:17:45.191199  343912 cni.go:84] Creating CNI manager for ""
	I1129 09:17:45.191226  343912 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:17:45.191244  343912 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1129 09:17:45.191264  343912 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-020433 NodeName:newest-cni-020433 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:17:45.191372  343912 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-020433"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
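	The kubeadm config echoed above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) written to /var/tmp/minikube/kubeadm.yaml.new. A quick way to sanity-check such a file before handing it to kubeadm is to decode each document in turn; a minimal sketch using gopkg.in/yaml.v3, with the file path as a placeholder:

	package main

	import (
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
				break // end of the multi-document stream
			} else if err != nil {
				panic(err) // malformed document
			}
			fmt.Printf("ok: %v / %v\n", doc["apiVersion"], doc["kind"])
		}
	}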
	
	I1129 09:17:45.191438  343912 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 09:17:45.199969  343912 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 09:17:45.200043  343912 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:17:45.208777  343912 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1129 09:17:45.222978  343912 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:17:45.238915  343912 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1129 09:17:45.253505  343912 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1129 09:17:45.257546  343912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:17:45.269034  343912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:17:45.354518  343912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:17:45.382355  343912 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433 for IP: 192.168.76.2
	I1129 09:17:45.382379  343912 certs.go:195] generating shared ca certs ...
	I1129 09:17:45.382407  343912 certs.go:227] acquiring lock for ca certs: {Name:mk9945b3aa42b3d50b9bfd795af3a1c63f4e35bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:45.382577  343912 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key
	I1129 09:17:45.382636  343912 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key
	I1129 09:17:45.382650  343912 certs.go:257] generating profile certs ...
	I1129 09:17:45.382718  343912 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/client.key
	I1129 09:17:45.382739  343912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/client.crt with IP's: []
	I1129 09:17:45.531926  343912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/client.crt ...
	I1129 09:17:45.531957  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/client.crt: {Name:mkeb17feaf8ba6750a01bd0a1f0441d4154bc65f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:45.532140  343912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/client.key ...
	I1129 09:17:45.532151  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/client.key: {Name:mke1454a7dc3fbfdd29bdb836050690bcbb7394e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:45.532230  343912 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.key.22e84c70
	I1129 09:17:45.532247  343912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.crt.22e84c70 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1129 09:17:45.624876  343912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.crt.22e84c70 ...
	I1129 09:17:45.624908  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.crt.22e84c70: {Name:mk7ef25787741e084b6a866e43c94e1e8fef637a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:45.625077  343912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.key.22e84c70 ...
	I1129 09:17:45.625090  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.key.22e84c70: {Name:mk1ecd69640eeb4a11bb5f1e1ff7ab99459cb558 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:45.625222  343912 certs.go:382] copying /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.crt.22e84c70 -> /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.crt
	I1129 09:17:45.625303  343912 certs.go:386] copying /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.key.22e84c70 -> /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.key
	I1129 09:17:45.625381  343912 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.key
	I1129 09:17:45.625401  343912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.crt with IP's: []
	I1129 09:17:45.648826  343912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.crt ...
	I1129 09:17:45.648864  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.crt: {Name:mk66c6222d92d3d2bb033717f49fc6858d0a9367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:45.649040  343912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.key ...
	I1129 09:17:45.649052  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.key: {Name:mk559719a3cba034552025e578cadb28054704f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
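	The client, apiserver, and aggregator proxy-client certs above are generated locally and signed by minikube's own CA, with the SAN list from the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2). A condensed sketch of the same mechanics with the standard library; it is self-signed here for brevity, whereas minikube signs with its minikubeCA key:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2")},
		}
		// Self-signed: template doubles as parent. minikube passes its CA here.
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}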
	I1129 09:17:45.649223  343912 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216.pem (1338 bytes)
	W1129 09:17:45.649259  343912 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216_empty.pem, impossibly tiny 0 bytes
	I1129 09:17:45.649269  343912 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem (1679 bytes)
	I1129 09:17:45.649291  343912 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem (1078 bytes)
	I1129 09:17:45.649314  343912 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:17:45.649337  343912 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem (1675 bytes)
	I1129 09:17:45.649376  343912 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem (1708 bytes)
	I1129 09:17:45.649920  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:17:45.669435  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 09:17:45.688777  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:17:45.707612  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1129 09:17:45.726954  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1129 09:17:45.745570  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1129 09:17:45.763773  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:17:45.781717  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1129 09:17:45.799936  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem --> /usr/share/ca-certificates/92162.pem (1708 bytes)
	I1129 09:17:45.820108  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:17:45.839214  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216.pem --> /usr/share/ca-certificates/9216.pem (1338 bytes)
	I1129 09:17:45.859643  343912 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:17:45.874007  343912 ssh_runner.go:195] Run: openssl version
	I1129 09:17:45.880775  343912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9216.pem && ln -fs /usr/share/ca-certificates/9216.pem /etc/ssl/certs/9216.pem"
	I1129 09:17:45.890438  343912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9216.pem
	I1129 09:17:45.894494  343912 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:34 /usr/share/ca-certificates/9216.pem
	I1129 09:17:45.894554  343912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9216.pem
	I1129 09:17:45.934499  343912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9216.pem /etc/ssl/certs/51391683.0"
	I1129 09:17:45.944013  343912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/92162.pem && ln -fs /usr/share/ca-certificates/92162.pem /etc/ssl/certs/92162.pem"
	I1129 09:17:45.953676  343912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92162.pem
	I1129 09:17:45.957999  343912 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:34 /usr/share/ca-certificates/92162.pem
	I1129 09:17:45.958047  343912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92162.pem
	I1129 09:17:45.998219  343912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/92162.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 09:17:46.008105  343912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:17:46.018512  343912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:17:46.022778  343912 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:17:46.022855  343912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:17:46.060278  343912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
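	The ln -fs commands above install each CA into OpenSSL's hashed lookup scheme: /etc/ssl/certs/<subject-hash>.0 must point at the PEM file, where the hash comes from openssl x509 -hash. A sketch of that step in Go, reusing the openssl invocation exactly as the log does (paths are illustrative and the program needs write access to /etc/ssl/certs):

	package main

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
		// Same command as the log: openssl x509 -hash -noout -in <pem>
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			panic(err)
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		os.Remove(link) // ln -f semantics: replace an existing link if present
		if err := os.Symlink(pemPath, link); err != nil {
			panic(err)
		}
	}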
	I1129 09:17:46.069685  343912 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:17:46.073627  343912 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1129 09:17:46.073677  343912 kubeadm.go:401] StartCluster: {Name:newest-cni-020433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-020433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:17:46.073751  343912 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 09:17:46.073796  343912 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:17:46.102729  343912 cri.go:89] found id: ""
	I1129 09:17:46.102806  343912 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:17:46.111499  343912 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1129 09:17:46.120045  343912 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1129 09:17:46.120110  343912 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1129 09:17:46.128326  343912 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1129 09:17:46.128366  343912 kubeadm.go:158] found existing configuration files:
	
	I1129 09:17:46.128413  343912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1129 09:17:46.136677  343912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1129 09:17:46.136741  343912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1129 09:17:46.144727  343912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1129 09:17:46.152908  343912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1129 09:17:46.152971  343912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1129 09:17:46.161300  343912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1129 09:17:46.170050  343912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1129 09:17:46.170117  343912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1129 09:17:46.179094  343912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1129 09:17:46.190258  343912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1129 09:17:46.190325  343912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
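The four grep/rm pairs above are one cleanup pattern repeated per kubeconfig: keep the file only if it already references the expected control-plane endpoint (grep's exit status 2 here simply means the file does not exist yet, so there is nothing to keep). A hedged sketch of the same loop:

	ENDPOINT="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    # Drop any kubeconfig that does not reference the expected endpoint
	    sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done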
	I1129 09:17:46.200333  343912 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1129 09:17:46.284775  343912 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1129 09:17:46.350549  343912 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
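Both preflight warnings are expected with the docker driver: the kicbase container has no /lib/modules tree matching the host kernel, so kubeadm cannot load the "configs" module to read the kernel config, and kubelet is launched by minikube rather than enabled as a systemd unit. SystemVerification is accordingly listed in --ignore-preflight-errors in the init command above. On an ordinary host, the kernel config kubeadm looks for is usually reachable one of two ways (a sketch):

	# Either as a plain file under /boot...
	ls /boot/config-$(uname -r)
	# ...or via the in-kernel copy exposed by the configs module
	sudo modprobe configs && zcat /proc/config.gz | head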
	W1129 09:17:47.327775  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	W1129 09:17:49.327943  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	I1129 09:17:49.827724  336547 pod_ready.go:94] pod "coredns-66bc5c9577-ptx67" is "Ready"
	I1129 09:17:49.827757  336547 pod_ready.go:86] duration metric: took 36.505830154s for pod "coredns-66bc5c9577-ptx67" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:49.830193  336547 pod_ready.go:83] waiting for pod "etcd-embed-certs-160987" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:49.834087  336547 pod_ready.go:94] pod "etcd-embed-certs-160987" is "Ready"
	I1129 09:17:49.834117  336547 pod_ready.go:86] duration metric: took 3.892584ms for pod "etcd-embed-certs-160987" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:49.836236  336547 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-160987" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:49.840124  336547 pod_ready.go:94] pod "kube-apiserver-embed-certs-160987" is "Ready"
	I1129 09:17:49.840148  336547 pod_ready.go:86] duration metric: took 3.889352ms for pod "kube-apiserver-embed-certs-160987" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:49.842042  336547 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-160987" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:50.026423  336547 pod_ready.go:94] pod "kube-controller-manager-embed-certs-160987" is "Ready"
	I1129 09:17:50.026453  336547 pod_ready.go:86] duration metric: took 184.390727ms for pod "kube-controller-manager-embed-certs-160987" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:50.225618  336547 pod_ready.go:83] waiting for pod "kube-proxy-57l9h" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:50.626123  336547 pod_ready.go:94] pod "kube-proxy-57l9h" is "Ready"
	I1129 09:17:50.626149  336547 pod_ready.go:86] duration metric: took 400.500945ms for pod "kube-proxy-57l9h" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:50.826449  336547 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-160987" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:51.226295  336547 pod_ready.go:94] pod "kube-scheduler-embed-certs-160987" is "Ready"
	I1129 09:17:51.226329  336547 pod_ready.go:86] duration metric: took 399.854281ms for pod "kube-scheduler-embed-certs-160987" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:51.226346  336547 pod_ready.go:40] duration metric: took 37.909395781s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:17:51.285055  336547 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1129 09:17:51.286778  336547 out.go:179] * Done! kubectl is now configured to use "embed-certs-160987" cluster and "default" namespace by default
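The pod_ready lines above poll each labelled kube-system pod until its Ready condition is true (or the pod is gone). Outside the test harness, roughly the same wait can be expressed with kubectl against the context this run just configured (a sketch, not minikube's actual code path):

	# Block until every kube-system pod reports Ready, or time out
	kubectl --context embed-certs-160987 -n kube-system \
	    wait pod --all --for=condition=Ready --timeout=120s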
	I1129 09:17:56.491067  343912 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1129 09:17:56.491128  343912 kubeadm.go:319] [preflight] Running pre-flight checks
	I1129 09:17:56.491204  343912 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1129 09:17:56.491252  343912 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1129 09:17:56.491321  343912 kubeadm.go:319] OS: Linux
	I1129 09:17:56.491400  343912 kubeadm.go:319] CGROUPS_CPU: enabled
	I1129 09:17:56.491441  343912 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1129 09:17:56.491502  343912 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1129 09:17:56.491558  343912 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1129 09:17:56.491602  343912 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1129 09:17:56.491642  343912 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1129 09:17:56.491683  343912 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1129 09:17:56.491733  343912 kubeadm.go:319] CGROUPS_IO: enabled
	I1129 09:17:56.491834  343912 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1129 09:17:56.491984  343912 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1129 09:17:56.492110  343912 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1129 09:17:56.492184  343912 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1129 09:17:56.493947  343912 out.go:252]   - Generating certificates and keys ...
	I1129 09:17:56.494037  343912 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1129 09:17:56.494134  343912 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1129 09:17:56.494235  343912 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1129 09:17:56.494315  343912 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1129 09:17:56.494392  343912 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1129 09:17:56.494466  343912 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1129 09:17:56.494546  343912 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1129 09:17:56.494718  343912 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-020433] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1129 09:17:56.494781  343912 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1129 09:17:56.494923  343912 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-020433] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1129 09:17:56.495006  343912 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1129 09:17:56.495078  343912 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1129 09:17:56.495157  343912 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1129 09:17:56.495234  343912 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1129 09:17:56.495280  343912 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1129 09:17:56.495370  343912 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1129 09:17:56.495457  343912 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1129 09:17:56.495570  343912 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1129 09:17:56.495624  343912 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1129 09:17:56.495696  343912 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1129 09:17:56.495760  343912 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1129 09:17:56.497322  343912 out.go:252]   - Booting up control plane ...
	I1129 09:17:56.497460  343912 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1129 09:17:56.497563  343912 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1129 09:17:56.497652  343912 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1129 09:17:56.497741  343912 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1129 09:17:56.497818  343912 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1129 09:17:56.497976  343912 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1129 09:17:56.498111  343912 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1129 09:17:56.498169  343912 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1129 09:17:56.498335  343912 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1129 09:17:56.498461  343912 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1129 09:17:56.498530  343912 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.935954ms
	I1129 09:17:56.498616  343912 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1129 09:17:56.498731  343912 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1129 09:17:56.498879  343912 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1129 09:17:56.498988  343912 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1129 09:17:56.499073  343912 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.504475511s
	I1129 09:17:56.499172  343912 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.695464789s
	I1129 09:17:56.499266  343912 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501879872s
	I1129 09:17:56.499440  343912 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1129 09:17:56.499624  343912 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1129 09:17:56.499691  343912 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1129 09:17:56.500020  343912 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-020433 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1129 09:17:56.500135  343912 kubeadm.go:319] [bootstrap-token] Using token: f82gs2.l4bciq1r030lvxp0
	I1129 09:17:56.501325  343912 out.go:252]   - Configuring RBAC rules ...
	I1129 09:17:56.501453  343912 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1129 09:17:56.501553  343912 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1129 09:17:56.501684  343912 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1129 09:17:56.501866  343912 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1129 09:17:56.502025  343912 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1129 09:17:56.502108  343912 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1129 09:17:56.502227  343912 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1129 09:17:56.502273  343912 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1129 09:17:56.502315  343912 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1129 09:17:56.502321  343912 kubeadm.go:319] 
	I1129 09:17:56.502376  343912 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1129 09:17:56.502381  343912 kubeadm.go:319] 
	I1129 09:17:56.502451  343912 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1129 09:17:56.502460  343912 kubeadm.go:319] 
	I1129 09:17:56.502481  343912 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1129 09:17:56.502532  343912 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1129 09:17:56.502576  343912 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1129 09:17:56.502586  343912 kubeadm.go:319] 
	I1129 09:17:56.502629  343912 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1129 09:17:56.502639  343912 kubeadm.go:319] 
	I1129 09:17:56.502689  343912 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1129 09:17:56.502697  343912 kubeadm.go:319] 
	I1129 09:17:56.502745  343912 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1129 09:17:56.502810  343912 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1129 09:17:56.502890  343912 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1129 09:17:56.502897  343912 kubeadm.go:319] 
	I1129 09:17:56.502971  343912 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1129 09:17:56.503057  343912 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1129 09:17:56.503064  343912 kubeadm.go:319] 
	I1129 09:17:56.503140  343912 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token f82gs2.l4bciq1r030lvxp0 \
	I1129 09:17:56.503224  343912 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c67f487f8861b4229187cb5510054720af943291cf02c59a2b23e487361f58e2 \
	I1129 09:17:56.503244  343912 kubeadm.go:319] 	--control-plane 
	I1129 09:17:56.503252  343912 kubeadm.go:319] 
	I1129 09:17:56.503335  343912 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1129 09:17:56.503344  343912 kubeadm.go:319] 
	I1129 09:17:56.503417  343912 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token f82gs2.l4bciq1r030lvxp0 \
	I1129 09:17:56.503523  343912 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c67f487f8861b4229187cb5510054720af943291cf02c59a2b23e487361f58e2 
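The --discovery-token-ca-cert-hash above pins the cluster CA for joining nodes. If it ever needs to be recomputed, the standard kubeadm recipe hashes the CA's public key; note that kubeadm's default CA path is /etc/kubernetes/pki/ca.crt, whereas this minikube run keeps certificates under /var/lib/minikube/certs:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'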
	I1129 09:17:56.503547  343912 cni.go:84] Creating CNI manager for ""
	I1129 09:17:56.503557  343912 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:17:56.504793  343912 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1129 09:17:56.505922  343912 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1129 09:17:56.510364  343912 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1129 09:17:56.510383  343912 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1129 09:17:56.523891  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
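With the docker driver and the crio runtime, minikube selects kindnet and applies the generated manifest with the cluster's pinned kubectl, as the Run line above shows. To confirm the CNI pods actually come up afterwards (a sketch; the DaemonSet name kindnet is an assumption matching the kindnet-* pod names seen elsewhere in this report):

	# Wait for the kindnet DaemonSet to finish rolling out
	kubectl -n kube-system rollout status daemonset kindnet --timeout=60s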
	I1129 09:17:56.771723  343912 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1129 09:17:56.771759  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:17:56.771857  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-020433 minikube.k8s.io/updated_at=2025_11_29T09_17_56_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af minikube.k8s.io/name=newest-cni-020433 minikube.k8s.io/primary=true
	I1129 09:17:56.870386  343912 ops.go:34] apiserver oom_adj: -16
	I1129 09:17:56.870493  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:17:57.370894  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:17:57.870685  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:17:58.370644  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:17:58.870909  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:17:59.371245  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:17:59.871577  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:18:00.370624  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:18:00.871043  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:18:01.370798  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:18:01.470229  343912 kubeadm.go:1114] duration metric: took 4.69851702s to wait for elevateKubeSystemPrivileges
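The half-second "kubectl get sa default" loop above is the elevateKubeSystemPrivileges wait: the default ServiceAccount only exists once the controller-manager's ServiceAccount controller is running, so minikube polls for it before declaring StartCluster done. A standalone sketch of the same wait:

	# Poll until the default ServiceAccount exists in the default namespace
	until kubectl -n default get sa default >/dev/null 2>&1; do
	    sleep 0.5
	done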
	I1129 09:18:01.470353  343912 kubeadm.go:403] duration metric: took 15.396675728s to StartCluster
	I1129 09:18:01.470403  343912 settings.go:142] acquiring lock: {Name:mkebe0b2667fa03f51e92459cd7b7f37fd4c23bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:18:01.470526  343912 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 09:18:01.473161  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/kubeconfig: {Name:mk6280be5f60ace95b1c1acc672c087e2366542f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:18:01.473501  343912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1129 09:18:01.473529  343912 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 09:18:01.473595  343912 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-020433"
	I1129 09:18:01.473611  343912 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-020433"
	I1129 09:18:01.473639  343912 host.go:66] Checking if "newest-cni-020433" exists ...
	I1129 09:18:01.473786  343912 config.go:182] Loaded profile config "newest-cni-020433": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:18:01.473872  343912 addons.go:70] Setting default-storageclass=true in profile "newest-cni-020433"
	I1129 09:18:01.473890  343912 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-020433"
	I1129 09:18:01.473505  343912 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 09:18:01.474234  343912 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Status}}
	I1129 09:18:01.474263  343912 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Status}}
	I1129 09:18:01.477129  343912 out.go:179] * Verifying Kubernetes components...
	I1129 09:18:01.478510  343912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:18:01.508488  343912 addons.go:239] Setting addon default-storageclass=true in "newest-cni-020433"
	I1129 09:18:01.508544  343912 host.go:66] Checking if "newest-cni-020433" exists ...
	I1129 09:18:01.509017  343912 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Status}}
	I1129 09:18:01.512765  343912 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:18:01.513878  343912 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:18:01.513901  343912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 09:18:01.513969  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:18:01.548536  343912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:18:01.549743  343912 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 09:18:01.549766  343912 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 09:18:01.549824  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:18:01.577630  343912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:18:01.603306  343912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1129 09:18:01.652699  343912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:18:01.679084  343912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:18:01.710552  343912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 09:18:01.806299  343912 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
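The sed pipeline at 09:18:01.603306 edits the coredns ConfigMap in flight: it inserts a hosts stanza mapping 192.168.76.1 to host.minikube.internal (with fallthrough) ahead of the forward plugin, adds a log directive ahead of errors, and replaces the ConfigMap; the "host record injected" line above confirms it took effect. To inspect the result (a sketch):

	# Print the live Corefile and look for the injected hosts block
	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'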
	I1129 09:18:01.808103  343912 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:18:01.808185  343912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:18:02.050418  343912 api_server.go:72] duration metric: took 576.481112ms to wait for apiserver process to appear ...
	I1129 09:18:02.050443  343912 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:18:02.050462  343912 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 09:18:02.057555  343912 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1129 09:18:02.058665  343912 api_server.go:141] control plane version: v1.34.1
	I1129 09:18:02.058689  343912 api_server.go:131] duration metric: took 8.238938ms to wait for apiserver health ...
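The healthz probe is a plain HTTPS GET; the 200 body is literally "ok", as captured above. The same check from a shell (a sketch; -k skips server verification, or point --cacert at the profile's .minikube/ca.crt to verify properly):

	curl -k https://192.168.76.2:8443/healthz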
	I1129 09:18:02.058698  343912 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:18:02.059528  343912 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	
	
	==> CRI-O <==
	Nov 29 09:17:23 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:23.360534378Z" level=info msg="Started container" PID=1739 containerID=d9bf366d8ee10b072a7c9c6fa1d3c7fd589a7189275bd7b0bd93e19de4752a2d description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4hblj/dashboard-metrics-scraper id=a648d8c3-6665-44a7-a8df-48663fdb5dae name=/runtime.v1.RuntimeService/StartContainer sandboxID=1bdafc0c56077e5d4103e283843edd3c82a08d58226b0e90b75aa0f1513d6e69
	Nov 29 09:17:24 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:24.31587653Z" level=info msg="Removing container: 7ae6aa913848d1746e5e27c0db29a6cfcaac2bc794cb23b355bc43b3a082682f" id=001b74c8-6edb-4b3b-af14-3d74c43e32ac name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 29 09:17:24 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:24.326027348Z" level=info msg="Removed container 7ae6aa913848d1746e5e27c0db29a6cfcaac2bc794cb23b355bc43b3a082682f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4hblj/dashboard-metrics-scraper" id=001b74c8-6edb-4b3b-af14-3d74c43e32ac name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 29 09:17:41 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:41.229481517Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0f293ccd-59e2-4bf1-857b-b70db019304a name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:17:41 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:41.230606897Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0c11792f-e12a-4f5f-9e37-632d5b3cbb82 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:17:41 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:41.231683021Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4hblj/dashboard-metrics-scraper" id=301e9fb5-ac8d-4bbe-80ab-40ba677962e0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:17:41 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:41.231834257Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:41 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:41.238198124Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:41 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:41.23881064Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:41 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:41.272580682Z" level=info msg="Created container 7ef057717f2cb9bfee390527d1229b602af731a3770cbb94529401105f0694e2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4hblj/dashboard-metrics-scraper" id=301e9fb5-ac8d-4bbe-80ab-40ba677962e0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:17:41 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:41.273288929Z" level=info msg="Starting container: 7ef057717f2cb9bfee390527d1229b602af731a3770cbb94529401105f0694e2" id=55d633d1-bacf-4a2a-bd40-bc384f669c82 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 09:17:41 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:41.275555488Z" level=info msg="Started container" PID=1749 containerID=7ef057717f2cb9bfee390527d1229b602af731a3770cbb94529401105f0694e2 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4hblj/dashboard-metrics-scraper id=55d633d1-bacf-4a2a-bd40-bc384f669c82 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1bdafc0c56077e5d4103e283843edd3c82a08d58226b0e90b75aa0f1513d6e69
	Nov 29 09:17:41 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:41.362960577Z" level=info msg="Removing container: d9bf366d8ee10b072a7c9c6fa1d3c7fd589a7189275bd7b0bd93e19de4752a2d" id=702dfba5-eb5b-442d-b344-027e77d7d69b name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 29 09:17:41 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:41.374244236Z" level=info msg="Removed container d9bf366d8ee10b072a7c9c6fa1d3c7fd589a7189275bd7b0bd93e19de4752a2d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4hblj/dashboard-metrics-scraper" id=702dfba5-eb5b-442d-b344-027e77d7d69b name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 29 09:17:43 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:43.370265111Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a63f97f8-27bc-444b-b6b8-68c94af10b16 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:17:43 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:43.371226685Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e4bc7ef7-02ae-47c7-9086-7553a76a670b name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:17:43 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:43.372337959Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=0593753c-aa8c-43e1-8210-ede3ebd38be5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:17:43 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:43.372475112Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:43 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:43.37672276Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:43 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:43.376878Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/93d69e0b9f9d66df0ad841ff2a79cd42135575dd212e55e93fbdaf985a152e92/merged/etc/passwd: no such file or directory"
	Nov 29 09:17:43 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:43.376896978Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/93d69e0b9f9d66df0ad841ff2a79cd42135575dd212e55e93fbdaf985a152e92/merged/etc/group: no such file or directory"
	Nov 29 09:17:43 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:43.37714023Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:43 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:43.405930735Z" level=info msg="Created container a0a7d268774521215ec9a9e231f8cbff24ab751bb32017578f757b56b868382b: kube-system/storage-provisioner/storage-provisioner" id=0593753c-aa8c-43e1-8210-ede3ebd38be5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:17:43 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:43.406621918Z" level=info msg="Starting container: a0a7d268774521215ec9a9e231f8cbff24ab751bb32017578f757b56b868382b" id=dadb3733-9816-4dff-8f94-4a61c8f06dd1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 09:17:43 default-k8s-diff-port-632243 crio[578]: time="2025-11-29T09:17:43.408746956Z" level=info msg="Started container" PID=1763 containerID=a0a7d268774521215ec9a9e231f8cbff24ab751bb32017578f757b56b868382b description=kube-system/storage-provisioner/storage-provisioner id=dadb3733-9816-4dff-8f94-4a61c8f06dd1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cffef6829c93519395a268b4f59ce4681b356db4fddd42df826f27db934709a4
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	a0a7d26877452       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   cffef6829c935       storage-provisioner                                    kube-system
	7ef057717f2cb       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago      Exited              dashboard-metrics-scraper   2                   1bdafc0c56077       dashboard-metrics-scraper-6ffb444bf9-4hblj             kubernetes-dashboard
	4886082fa2dbf       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   a3762881ec276       kubernetes-dashboard-855c9754f9-26vff                  kubernetes-dashboard
	f86d4e903149f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           50 seconds ago      Running             coredns                     0                   e1de2ba52ea07       coredns-66bc5c9577-z4m7c                               kube-system
	3d542c139be2a       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   a18a153401910       busybox                                                default
	dcd5e71f1547b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   cffef6829c935       storage-provisioner                                    kube-system
	6d9c6a1fe80d1       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           50 seconds ago      Running             kube-proxy                  0                   c6663de35bbfd       kube-proxy-p2nf7                                       kube-system
	92732529bb831       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   9c434856825f3       kindnet-tpstm                                          kube-system
	b13c8a23740ac       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           52 seconds ago      Running             kube-apiserver              0                   8552a74ed5f1a       kube-apiserver-default-k8s-diff-port-632243            kube-system
	2080eaa5b786c       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           52 seconds ago      Running             kube-scheduler              0                   90e78a2184480       kube-scheduler-default-k8s-diff-port-632243            kube-system
	c75e80b4e2dbb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           52 seconds ago      Running             etcd                        0                   20a5aaa0b97bc       etcd-default-k8s-diff-port-632243                      kube-system
	be8adeee9f904       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           52 seconds ago      Running             kube-controller-manager     0                   addd317cf1d68       kube-controller-manager-default-k8s-diff-port-632243   kube-system
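This table is the runtime's view rather than the apiserver's: with CRI-O it comes straight from the CRI socket, so the same listing is available even when the control plane is down. A sketch of the underlying commands:

	# All containers, running and exited
	sudo crictl ps -a
	# The pod sandboxes they belong to
	sudo crictl pods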
	
	
	==> coredns [f86d4e903149f37fd40a69cf9fdd0675519e20733587ef981faf69f6c60584c4] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45112 - 19302 "HINFO IN 5917627528214500071.6117096667480374381. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.014694143s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
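The three i/o timeouts above mean coredns could not reach the apiserver through the kubernetes Service ClusterIP (10.96.0.1:443) during startup, which is common while kube-proxy is still reprogramming Service rules after a restart; the earlier "starting server with unsynced Kubernetes API" warning shows coredns serves anyway and keeps retrying. A quick triage step if this persists (a sketch):

	# Did kube-proxy come up and sync its rules?
	kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=20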
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-632243
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-632243
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=default-k8s-diff-port-632243
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T09_16_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 09:16:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-632243
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 09:17:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 09:17:42 +0000   Sat, 29 Nov 2025 09:16:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 09:17:42 +0000   Sat, 29 Nov 2025 09:16:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 09:17:42 +0000   Sat, 29 Nov 2025 09:16:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 09:17:42 +0000   Sat, 29 Nov 2025 09:16:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-632243
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                bcdb7d0a-1357-4cf0-985d-43631a533a4d
	  Boot ID:                    5b66e152-4b71-4129-a911-cefbf4861c86
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-66bc5c9577-z4m7c                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     102s
	  kube-system                 etcd-default-k8s-diff-port-632243                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         107s
	  kube-system                 kindnet-tpstm                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      102s
	  kube-system                 kube-apiserver-default-k8s-diff-port-632243             250m (3%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-632243    200m (2%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-proxy-p2nf7                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-scheduler-default-k8s-diff-port-632243             100m (1%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-4hblj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-26vff                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 101s               kube-proxy       
	  Normal  Starting                 49s                kube-proxy       
	  Normal  NodeHasSufficientMemory  107s               kubelet          Node default-k8s-diff-port-632243 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    107s               kubelet          Node default-k8s-diff-port-632243 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     107s               kubelet          Node default-k8s-diff-port-632243 status is now: NodeHasSufficientPID
	  Normal  Starting                 107s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           103s               node-controller  Node default-k8s-diff-port-632243 event: Registered Node default-k8s-diff-port-632243 in Controller
	  Normal  NodeReady                91s                kubelet          Node default-k8s-diff-port-632243 status is now: NodeReady
	  Normal  Starting                 53s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  53s (x8 over 53s)  kubelet          Node default-k8s-diff-port-632243 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    53s (x8 over 53s)  kubelet          Node default-k8s-diff-port-632243 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     53s (x8 over 53s)  kubelet          Node default-k8s-diff-port-632243 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           47s                node-controller  Node default-k8s-diff-port-632243 event: Registered Node default-k8s-diff-port-632243 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 e1 87 e4 84 ae 08 06
	[  +0.002640] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8a c7 6c 3e 12 d1 08 06
	[ +14.778105] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 56 8a 29 c7 2a 6f 08 06
	[ +13.520989] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 86 32 aa eb 52 77 08 06
	[  +0.000371] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 56 8a 29 c7 2a 6f 08 06
	[ +14.762906] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 df ae 93 da f6 08 06
	[  +0.000384] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a c7 6c 3e 12 d1 08 06
	[Nov29 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 2e a3 ac f9 5c c0 08 06
	[ +13.297626] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 33 ff 54 aa a1 08 06
	[  +7.390906] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0a 68 f3 3d 3c e9 08 06
	[  +0.000436] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 2e a3 ac f9 5c c0 08 06
	[  +7.145922] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ea c3 2f 2a a3 01 08 06
	[  +0.000415] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e2 33 ff 54 aa a1 08 06
	
	
	==> etcd [c75e80b4e2dbb59237ca7e83b6a87a80d377951cce4c561324de39b3ea24a433] <==
	{"level":"warn","ts":"2025-11-29T09:17:11.087134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.100034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.110759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.123176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.132623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.138779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.151802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.158980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.173342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.185151Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.194495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.203750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.212826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.222578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.231761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.244597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.255325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.266536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.274448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.285250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.331453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.336433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.344618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.353409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.404797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34660","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:18:02 up  1:00,  0 user,  load average: 3.06, 3.68, 2.49
	Linux default-k8s-diff-port-632243 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [92732529bb831b3e850239c923cd55b6ba3e6316b7e319567d1bb7ed6abde79e] <==
	I1129 09:17:12.887630       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 09:17:12.887916       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1129 09:17:12.888138       1 main.go:148] setting mtu 1500 for CNI 
	I1129 09:17:12.888159       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 09:17:12.888190       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T09:17:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 09:17:13.182668       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 09:17:13.182725       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 09:17:13.182739       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 09:17:13.283201       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1129 09:17:13.682886       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 09:17:13.682924       1 metrics.go:72] Registering metrics
	I1129 09:17:13.682989       1 controller.go:711] "Syncing nftables rules"
	I1129 09:17:23.088600       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1129 09:17:23.088673       1 main.go:301] handling current node
	I1129 09:17:33.093051       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1129 09:17:33.093096       1 main.go:301] handling current node
	I1129 09:17:43.089426       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1129 09:17:43.089466       1 main.go:301] handling current node
	I1129 09:17:53.091977       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1129 09:17:53.092031       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b13c8a23740acd98b7a6a7244c86241544729c4895bf870e9bb842604451a0f4] <==
	I1129 09:17:12.209645       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1129 09:17:12.209730       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1129 09:17:12.216949       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1129 09:17:12.217055       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1129 09:17:12.217105       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1129 09:17:12.234219       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1129 09:17:12.236485       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1129 09:17:12.252755       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1129 09:17:12.252908       1 policy_source.go:240] refreshing policies
	I1129 09:17:12.262702       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1129 09:17:12.263047       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1129 09:17:12.263112       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1129 09:17:12.275794       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:17:12.291551       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 09:17:12.332786       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 09:17:12.634871       1 controller.go:667] quota admission added evaluator for: namespaces
	I1129 09:17:12.688969       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 09:17:12.721876       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 09:17:12.732808       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 09:17:12.798653       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.24.253"}
	I1129 09:17:12.813777       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.88.109"}
	I1129 09:17:13.066271       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 09:17:15.570240       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 09:17:15.920986       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 09:17:16.172232       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [be8adeee9f904b03165bd07f7f9279fad60f6e70a12d988e651be3f8e0e5974c] <==
	I1129 09:17:15.544486       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1129 09:17:15.547299       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1129 09:17:15.564865       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1129 09:17:15.564876       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1129 09:17:15.565214       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1129 09:17:15.565453       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1129 09:17:15.565482       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1129 09:17:15.565504       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1129 09:17:15.565920       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1129 09:17:15.567168       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1129 09:17:15.567679       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1129 09:17:15.572988       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:17:15.573014       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1129 09:17:15.573027       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1129 09:17:15.581905       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1129 09:17:15.584264       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1129 09:17:15.584265       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:17:15.585388       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1129 09:17:15.585432       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1129 09:17:15.586684       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1129 09:17:15.588897       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1129 09:17:15.591163       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1129 09:17:15.593499       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1129 09:17:15.605926       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:17:15.605992       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	
	
	==> kube-proxy [6d9c6a1fe80d134c1649a8574ce4c7fed4aca61a3d0743bc1723d61b82585852] <==
	I1129 09:17:12.681707       1 server_linux.go:53] "Using iptables proxy"
	I1129 09:17:12.753902       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 09:17:12.855529       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 09:17:12.856908       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1129 09:17:12.857196       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 09:17:12.885798       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 09:17:12.885875       1 server_linux.go:132] "Using iptables Proxier"
	I1129 09:17:12.893207       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 09:17:12.893654       1 server.go:527] "Version info" version="v1.34.1"
	I1129 09:17:12.893725       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:17:12.895394       1 config.go:200] "Starting service config controller"
	I1129 09:17:12.895424       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 09:17:12.895265       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 09:17:12.895443       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 09:17:12.895715       1 config.go:106] "Starting endpoint slice config controller"
	I1129 09:17:12.895911       1 config.go:309] "Starting node config controller"
	I1129 09:17:12.895967       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 09:17:12.895971       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 09:17:12.995652       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 09:17:12.995687       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 09:17:12.996502       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 09:17:12.996508       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2080eaa5b786c79ead07692c870ce9928ace57a47032f699d66882570b205513] <==
	I1129 09:17:12.167728       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:17:12.171243       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:17:12.171384       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:17:12.172723       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E1129 09:17:12.173245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1129 09:17:12.172863       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1129 09:17:12.181894       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1129 09:17:12.181984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1129 09:17:12.182079       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 09:17:12.182164       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1129 09:17:12.182757       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 09:17:12.182863       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1129 09:17:12.183727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1129 09:17:12.183927       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1129 09:17:12.184042       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1129 09:17:12.184329       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1129 09:17:12.184639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1129 09:17:12.184691       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1129 09:17:12.184780       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 09:17:12.184962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1129 09:17:12.185112       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 09:17:12.185154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 09:17:12.185369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1129 09:17:12.185521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1129 09:17:13.172260       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 09:17:16 default-k8s-diff-port-632243 kubelet[738]: I1129 09:17:16.148590     738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7896abb3-c3b1-4280-9b0c-76b64c1ecdc9-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-26vff\" (UID: \"7896abb3-c3b1-4280-9b0c-76b64c1ecdc9\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-26vff"
	Nov 29 09:17:16 default-k8s-diff-port-632243 kubelet[738]: I1129 09:17:16.148743     738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9srk\" (UniqueName: \"kubernetes.io/projected/7896abb3-c3b1-4280-9b0c-76b64c1ecdc9-kube-api-access-k9srk\") pod \"kubernetes-dashboard-855c9754f9-26vff\" (UID: \"7896abb3-c3b1-4280-9b0c-76b64c1ecdc9\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-26vff"
	Nov 29 09:17:16 default-k8s-diff-port-632243 kubelet[738]: I1129 09:17:16.148789     738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbq84\" (UniqueName: \"kubernetes.io/projected/d1f2d9a9-4051-4b6b-8eb7-1d65c5e96e14-kube-api-access-sbq84\") pod \"dashboard-metrics-scraper-6ffb444bf9-4hblj\" (UID: \"d1f2d9a9-4051-4b6b-8eb7-1d65c5e96e14\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4hblj"
	Nov 29 09:17:16 default-k8s-diff-port-632243 kubelet[738]: I1129 09:17:16.148815     738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d1f2d9a9-4051-4b6b-8eb7-1d65c5e96e14-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-4hblj\" (UID: \"d1f2d9a9-4051-4b6b-8eb7-1d65c5e96e14\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4hblj"
	Nov 29 09:17:22 default-k8s-diff-port-632243 kubelet[738]: I1129 09:17:22.678617     738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-26vff" podStartSLOduration=3.453743984 podStartE2EDuration="6.678594202s" podCreationTimestamp="2025-11-29 09:17:16 +0000 UTC" firstStartedPulling="2025-11-29 09:17:16.426735187 +0000 UTC m=+7.312503419" lastFinishedPulling="2025-11-29 09:17:19.6515854 +0000 UTC m=+10.537353637" observedRunningTime="2025-11-29 09:17:20.323358019 +0000 UTC m=+11.209126260" watchObservedRunningTime="2025-11-29 09:17:22.678594202 +0000 UTC m=+13.564362449"
	Nov 29 09:17:23 default-k8s-diff-port-632243 kubelet[738]: I1129 09:17:23.309927     738 scope.go:117] "RemoveContainer" containerID="7ae6aa913848d1746e5e27c0db29a6cfcaac2bc794cb23b355bc43b3a082682f"
	Nov 29 09:17:24 default-k8s-diff-port-632243 kubelet[738]: I1129 09:17:24.314273     738 scope.go:117] "RemoveContainer" containerID="7ae6aa913848d1746e5e27c0db29a6cfcaac2bc794cb23b355bc43b3a082682f"
	Nov 29 09:17:24 default-k8s-diff-port-632243 kubelet[738]: I1129 09:17:24.314555     738 scope.go:117] "RemoveContainer" containerID="d9bf366d8ee10b072a7c9c6fa1d3c7fd589a7189275bd7b0bd93e19de4752a2d"
	Nov 29 09:17:24 default-k8s-diff-port-632243 kubelet[738]: E1129 09:17:24.314740     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4hblj_kubernetes-dashboard(d1f2d9a9-4051-4b6b-8eb7-1d65c5e96e14)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4hblj" podUID="d1f2d9a9-4051-4b6b-8eb7-1d65c5e96e14"
	Nov 29 09:17:25 default-k8s-diff-port-632243 kubelet[738]: I1129 09:17:25.318249     738 scope.go:117] "RemoveContainer" containerID="d9bf366d8ee10b072a7c9c6fa1d3c7fd589a7189275bd7b0bd93e19de4752a2d"
	Nov 29 09:17:25 default-k8s-diff-port-632243 kubelet[738]: E1129 09:17:25.318417     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4hblj_kubernetes-dashboard(d1f2d9a9-4051-4b6b-8eb7-1d65c5e96e14)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4hblj" podUID="d1f2d9a9-4051-4b6b-8eb7-1d65c5e96e14"
	Nov 29 09:17:28 default-k8s-diff-port-632243 kubelet[738]: I1129 09:17:28.340525     738 scope.go:117] "RemoveContainer" containerID="d9bf366d8ee10b072a7c9c6fa1d3c7fd589a7189275bd7b0bd93e19de4752a2d"
	Nov 29 09:17:28 default-k8s-diff-port-632243 kubelet[738]: E1129 09:17:28.340764     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4hblj_kubernetes-dashboard(d1f2d9a9-4051-4b6b-8eb7-1d65c5e96e14)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4hblj" podUID="d1f2d9a9-4051-4b6b-8eb7-1d65c5e96e14"
	Nov 29 09:17:41 default-k8s-diff-port-632243 kubelet[738]: I1129 09:17:41.228769     738 scope.go:117] "RemoveContainer" containerID="d9bf366d8ee10b072a7c9c6fa1d3c7fd589a7189275bd7b0bd93e19de4752a2d"
	Nov 29 09:17:41 default-k8s-diff-port-632243 kubelet[738]: I1129 09:17:41.361586     738 scope.go:117] "RemoveContainer" containerID="d9bf366d8ee10b072a7c9c6fa1d3c7fd589a7189275bd7b0bd93e19de4752a2d"
	Nov 29 09:17:41 default-k8s-diff-port-632243 kubelet[738]: I1129 09:17:41.361818     738 scope.go:117] "RemoveContainer" containerID="7ef057717f2cb9bfee390527d1229b602af731a3770cbb94529401105f0694e2"
	Nov 29 09:17:41 default-k8s-diff-port-632243 kubelet[738]: E1129 09:17:41.362045     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4hblj_kubernetes-dashboard(d1f2d9a9-4051-4b6b-8eb7-1d65c5e96e14)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4hblj" podUID="d1f2d9a9-4051-4b6b-8eb7-1d65c5e96e14"
	Nov 29 09:17:43 default-k8s-diff-port-632243 kubelet[738]: I1129 09:17:43.369814     738 scope.go:117] "RemoveContainer" containerID="dcd5e71f1547b1e671741b763fc8bcd6c37b199a7a47b50bded35e37d88f15e5"
	Nov 29 09:17:48 default-k8s-diff-port-632243 kubelet[738]: I1129 09:17:48.339991     738 scope.go:117] "RemoveContainer" containerID="7ef057717f2cb9bfee390527d1229b602af731a3770cbb94529401105f0694e2"
	Nov 29 09:17:48 default-k8s-diff-port-632243 kubelet[738]: E1129 09:17:48.340268     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4hblj_kubernetes-dashboard(d1f2d9a9-4051-4b6b-8eb7-1d65c5e96e14)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4hblj" podUID="d1f2d9a9-4051-4b6b-8eb7-1d65c5e96e14"
	Nov 29 09:17:58 default-k8s-diff-port-632243 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 29 09:17:58 default-k8s-diff-port-632243 kubelet[738]: I1129 09:17:58.030180     738 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 29 09:17:58 default-k8s-diff-port-632243 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 29 09:17:58 default-k8s-diff-port-632243 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 29 09:17:58 default-k8s-diff-port-632243 systemd[1]: kubelet.service: Consumed 1.741s CPU time.
	
	
	==> kubernetes-dashboard [4886082fa2dbfffcedb9ba51af5ccb52f1828c3b9f03f1a2f251ec784b244659] <==
	2025/11/29 09:17:19 Starting overwatch
	2025/11/29 09:17:19 Using namespace: kubernetes-dashboard
	2025/11/29 09:17:19 Using in-cluster config to connect to apiserver
	2025/11/29 09:17:19 Using secret token for csrf signing
	2025/11/29 09:17:19 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/29 09:17:19 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/29 09:17:19 Successful initial request to the apiserver, version: v1.34.1
	2025/11/29 09:17:19 Generating JWE encryption key
	2025/11/29 09:17:19 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/29 09:17:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/29 09:17:19 Initializing JWE encryption key from synchronized object
	2025/11/29 09:17:19 Creating in-cluster Sidecar client
	2025/11/29 09:17:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/29 09:17:19 Serving insecurely on HTTP port: 9090
	2025/11/29 09:17:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [a0a7d268774521215ec9a9e231f8cbff24ab751bb32017578f757b56b868382b] <==
	I1129 09:17:43.421527       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1129 09:17:43.430172       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1129 09:17:43.430237       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1129 09:17:43.432879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:17:46.887946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:17:51.148895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:17:54.747771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:17:57.801989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:00.824932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:00.830243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:18:00.830427       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 09:18:00.830573       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3d9080e2-1f84-4caa-8750-c2395a4c0f6c", APIVersion:"v1", ResourceVersion:"628", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-632243_b75939db-fabd-43a5-bf52-db236e980f77 became leader
	I1129 09:18:00.830641       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-632243_b75939db-fabd-43a5-bf52-db236e980f77!
	W1129 09:18:00.833022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:00.836742       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:18:00.931684       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-632243_b75939db-fabd-43a5-bf52-db236e980f77!
	W1129 09:18:02.839671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:02.844123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [dcd5e71f1547b1e671741b763fc8bcd6c37b199a7a47b50bded35e37d88f15e5] <==
	I1129 09:17:12.652238       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1129 09:17:42.655356       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
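The fatal line in the second storage-provisioner block above is a client-go discovery call timing out against the cluster service VIP (the GET to https://10.96.0.1:443/version with the default 32s timeout). For reference, a minimal sketch of that probe using only client-go's public API (illustrative, not the provisioner's actual source):

	// version_probe.go: reproduce the "/version" probe that times out above.
	// Assumes an in-cluster environment; not the storage-provisioner's code.
	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			fmt.Println("not running in a cluster:", err)
			return
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			fmt.Println("client init:", err)
			return
		}
		// This issues GET https://<service VIP>:443/version; when the VIP is
		// unreachable it fails the same way as the F1129 line above.
		v, err := clientset.Discovery().ServerVersion()
		if err != nil {
			fmt.Println("error getting server version:", err)
			return
		}
		fmt.Println("server version:", v.GitVersion)
	}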
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-632243 -n default-k8s-diff-port-632243
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-632243 -n default-k8s-diff-port-632243: exit status 2 (401.046291ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-632243 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.19s)
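The status checks in this section read a single field with a Go template (out/minikube-linux-amd64 status --format={{.APIServer}}). A small sketch of how such a template renders a status struct; the Status type and its fields here are hypothetical, mirroring the fields queried in this report ({{.Host}}, {{.APIServer}}) rather than minikube's internal type:

	package main

	import (
		"os"
		"text/template"
	)

	// Status carries the fields queried via --format in this report; the
	// struct itself is illustrative, not minikube's own definition.
	type Status struct {
		Host      string
		APIServer string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		// Prints "Running". Template output and the process exit code are
		// independent, which is why the check above can print Running on
		// stdout and still exit with status 2.
		_ = tmpl.Execute(os.Stdout, Status{Host: "Running", APIServer: "Running"})
	}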

x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.3s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-020433 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-020433 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (269.135937ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:18:02Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_logs_00302df19cf26dc43b03ea32978d5cabc189a5ea_3.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-020433 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
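The MK_ADDON_ENABLE_PAUSED failure above comes from the paused-state check shelling out to runc, which exits 1 because /run/runc does not exist on this crio node. A minimal sketch of such a check, assuming only runc's documented `list -f json` output (this is not minikube's actual implementation):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runcContainer holds the two fields of `runc list -f json` output that
	// a paused-state check needs.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func listPaused() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// The branch hit in the log above: runc exits 1 with
			// "open /run/runc: no such file or directory" before any JSON.
			return nil, fmt.Errorf("runc: sudo runc list -f json: %w", err)
		}
		var containers []runcContainer
		if err := json.Unmarshal(out, &containers); err != nil {
			return nil, err
		}
		var paused []string
		for _, c := range containers {
			if c.Status == "paused" {
				paused = append(paused, c.ID)
			}
		}
		return paused, nil
	}

	func main() {
		paused, err := listPaused()
		if err != nil {
			fmt.Println("check paused: list paused:", err)
			return
		}
		fmt.Println("paused containers:", paused)
	}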
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-020433
helpers_test.go:243: (dbg) docker inspect newest-cni-020433:

-- stdout --
	[
	    {
	        "Id": "a9ac1a439ce6b1fb29944ad6280f3be5fb721f32ccaf7c000e9793dbab9d8063",
	        "Created": "2025-11-29T09:17:38.486313312Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 346009,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T09:17:38.808926494Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/a9ac1a439ce6b1fb29944ad6280f3be5fb721f32ccaf7c000e9793dbab9d8063/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a9ac1a439ce6b1fb29944ad6280f3be5fb721f32ccaf7c000e9793dbab9d8063/hostname",
	        "HostsPath": "/var/lib/docker/containers/a9ac1a439ce6b1fb29944ad6280f3be5fb721f32ccaf7c000e9793dbab9d8063/hosts",
	        "LogPath": "/var/lib/docker/containers/a9ac1a439ce6b1fb29944ad6280f3be5fb721f32ccaf7c000e9793dbab9d8063/a9ac1a439ce6b1fb29944ad6280f3be5fb721f32ccaf7c000e9793dbab9d8063-json.log",
	        "Name": "/newest-cni-020433",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-020433:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-020433",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a9ac1a439ce6b1fb29944ad6280f3be5fb721f32ccaf7c000e9793dbab9d8063",
	                "LowerDir": "/var/lib/docker/overlay2/a8a6ba38910989b11fc84ca9f5e0a6bd875cd888d1b48820e429d717fc735951-init/diff:/var/lib/docker/overlay2/5b012372cfb54f6c71f4d7f0bca0124866eeda530eaa04bd84a67c7b4c8b35a8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a8a6ba38910989b11fc84ca9f5e0a6bd875cd888d1b48820e429d717fc735951/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a8a6ba38910989b11fc84ca9f5e0a6bd875cd888d1b48820e429d717fc735951/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a8a6ba38910989b11fc84ca9f5e0a6bd875cd888d1b48820e429d717fc735951/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-020433",
	                "Source": "/var/lib/docker/volumes/newest-cni-020433/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-020433",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-020433",
	                "name.minikube.sigs.k8s.io": "newest-cni-020433",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "438c16e1cb1b9e9e11114163fb0fe0ad1f30abfacaac38b8cc9ad54795245249",
	            "SandboxKey": "/var/run/docker/netns/438c16e1cb1b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-020433": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "aef7b8e187de0f8bf6cc69caec08dbd4417b8aa19d6d09df2b42cb2151e49057",
	                    "EndpointID": "b15751090cc1f4d20c87dc9e7120ed66e225368b278f96817e22ab109a0f4331",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "92:96:d2:bb:cd:03",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-020433",
	                        "a9ac1a439ce6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
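The full `docker inspect` dump above can be reduced to just the fields a check needs with an inline Go template (`docker inspect -f`). A short sketch against the container from this report; the printed values match the State block above:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Equivalent to:
		//   docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' newest-cni-020433
		out, err := exec.Command("docker", "inspect",
			"-f", "{{.State.Status}} paused={{.State.Paused}}",
			"newest-cni-020433").Output()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		// For the dump above this prints: running paused=false
		fmt.Println(strings.TrimSpace(string(out)))
	}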
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-020433 -n newest-cni-020433
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-020433 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-020433 logs -n 25: (1.098444801s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p old-k8s-version-680646 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:16 UTC │
	│ start   │ -p old-k8s-version-680646 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:17 UTC │
	│ start   │ -p no-preload-897274 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-160987 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-632243 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	│ stop    │ -p embed-certs-160987 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:17 UTC │
	│ stop    │ -p default-k8s-diff-port-632243 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-160987 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ start   │ -p embed-certs-160987 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-632243 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ start   │ -p default-k8s-diff-port-632243 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ image   │ old-k8s-version-680646 image list --format=json                                                                                                                                                                                               │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ pause   │ -p old-k8s-version-680646 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │                     │
	│ delete  │ -p old-k8s-version-680646                                                                                                                                                                                                                     │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ image   │ no-preload-897274 image list --format=json                                                                                                                                                                                                    │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ pause   │ -p no-preload-897274 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │                     │
	│ delete  │ -p old-k8s-version-680646                                                                                                                                                                                                                     │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ start   │ -p newest-cni-020433 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-020433            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:18 UTC │
	│ delete  │ -p no-preload-897274                                                                                                                                                                                                                          │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ delete  │ -p no-preload-897274                                                                                                                                                                                                                          │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ image   │ default-k8s-diff-port-632243 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ pause   │ -p default-k8s-diff-port-632243 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-020433 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-020433            │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │                     │
	│ image   │ embed-certs-160987 image list --format=json                                                                                                                                                                                                   │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │ 29 Nov 25 09:18 UTC │
	│ pause   │ -p embed-certs-160987 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:17:32
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 09:17:32.750525  343912 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:17:32.750831  343912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:17:32.750854  343912 out.go:374] Setting ErrFile to fd 2...
	I1129 09:17:32.750859  343912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:17:32.751040  343912 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 09:17:32.751569  343912 out.go:368] Setting JSON to false
	I1129 09:17:32.753086  343912 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3605,"bootTime":1764404248,"procs":412,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 09:17:32.753155  343912 start.go:143] virtualization: kvm guest
	I1129 09:17:32.755163  343912 out.go:179] * [newest-cni-020433] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 09:17:32.756656  343912 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:17:32.756692  343912 notify.go:221] Checking for updates...
	I1129 09:17:32.759425  343912 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:17:32.760722  343912 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 09:17:32.765362  343912 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5652/.minikube
	I1129 09:17:32.766699  343912 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 09:17:32.768011  343912 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:17:32.769812  343912 config.go:182] Loaded profile config "default-k8s-diff-port-632243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:17:32.769952  343912 config.go:182] Loaded profile config "embed-certs-160987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:17:32.770081  343912 config.go:182] Loaded profile config "no-preload-897274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:17:32.770208  343912 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:17:32.794655  343912 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1129 09:17:32.794775  343912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:17:32.856269  343912 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:74 SystemTime:2025-11-29 09:17:32.845151576 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:17:32.856388  343912 docker.go:319] overlay module found
	I1129 09:17:32.858258  343912 out.go:179] * Using the docker driver based on user configuration
	I1129 09:17:32.859415  343912 start.go:309] selected driver: docker
	I1129 09:17:32.859434  343912 start.go:927] validating driver "docker" against <nil>
	I1129 09:17:32.859451  343912 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:17:32.860352  343912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:17:32.930751  343912 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:74 SystemTime:2025-11-29 09:17:32.91839311 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:17:32.930951  343912 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1129 09:17:32.930985  343912 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1129 09:17:32.931224  343912 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1129 09:17:32.933425  343912 out.go:179] * Using Docker driver with root privileges
	I1129 09:17:32.934824  343912 cni.go:84] Creating CNI manager for ""
	I1129 09:17:32.934925  343912 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:17:32.934944  343912 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1129 09:17:32.935044  343912 start.go:353] cluster config:
	{Name:newest-cni-020433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-020433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:17:32.936354  343912 out.go:179] * Starting "newest-cni-020433" primary control-plane node in "newest-cni-020433" cluster
	I1129 09:17:32.937514  343912 cache.go:134] Beginning downloading kic base image for docker with crio
	I1129 09:17:32.938803  343912 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 09:17:32.940016  343912 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:17:32.940051  343912 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1129 09:17:32.940062  343912 cache.go:65] Caching tarball of preloaded images
	I1129 09:17:32.940107  343912 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 09:17:32.940163  343912 preload.go:238] Found /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1129 09:17:32.940176  343912 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1129 09:17:32.940278  343912 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/config.json ...
	I1129 09:17:32.940301  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/config.json: {Name:mk7d4da653b0e884b27837053cd3d354c3ff76e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:32.963727  343912 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 09:17:32.963754  343912 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 09:17:32.963777  343912 cache.go:243] Successfully downloaded all kic artifacts
	I1129 09:17:32.963830  343912 start.go:360] acquireMachinesLock for newest-cni-020433: {Name:mk6347901682a01c9d317c6a402722ce1e16792e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:17:32.963998  343912 start.go:364] duration metric: took 95.455µs to acquireMachinesLock for "newest-cni-020433"
	I1129 09:17:32.964029  343912 start.go:93] Provisioning new machine with config: &{Name:newest-cni-020433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-020433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 09:17:32.964128  343912 start.go:125] createHost starting for "" (driver="docker")
	W1129 09:17:33.828970  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	W1129 09:17:35.829789  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	W1129 09:17:33.948277  336858 pod_ready.go:104] pod "coredns-66bc5c9577-z4m7c" is not "Ready", error: <nil>
	W1129 09:17:36.448064  336858 pod_ready.go:104] pod "coredns-66bc5c9577-z4m7c" is not "Ready", error: <nil>
	I1129 09:17:32.965989  343912 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1129 09:17:32.966316  343912 start.go:159] libmachine.API.Create for "newest-cni-020433" (driver="docker")
	I1129 09:17:32.966356  343912 client.go:173] LocalClient.Create starting
	I1129 09:17:32.966470  343912 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem
	I1129 09:17:32.966524  343912 main.go:143] libmachine: Decoding PEM data...
	I1129 09:17:32.966555  343912 main.go:143] libmachine: Parsing certificate...
	I1129 09:17:32.966626  343912 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem
	I1129 09:17:32.966654  343912 main.go:143] libmachine: Decoding PEM data...
	I1129 09:17:32.966670  343912 main.go:143] libmachine: Parsing certificate...
	I1129 09:17:32.967123  343912 cli_runner.go:164] Run: docker network inspect newest-cni-020433 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1129 09:17:32.987734  343912 cli_runner.go:211] docker network inspect newest-cni-020433 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1129 09:17:32.987872  343912 network_create.go:284] running [docker network inspect newest-cni-020433] to gather additional debugging logs...
	I1129 09:17:32.987905  343912 cli_runner.go:164] Run: docker network inspect newest-cni-020433
	W1129 09:17:33.007164  343912 cli_runner.go:211] docker network inspect newest-cni-020433 returned with exit code 1
	I1129 09:17:33.007194  343912 network_create.go:287] error running [docker network inspect newest-cni-020433]: docker network inspect newest-cni-020433: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-020433 not found
	I1129 09:17:33.007209  343912 network_create.go:289] output of [docker network inspect newest-cni-020433]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-020433 not found
	
	** /stderr **
	I1129 09:17:33.007343  343912 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:17:33.027663  343912 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-94fc752bc7a7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:ed:43:e0:ad:5a} reservation:<nil>}
	I1129 09:17:33.028420  343912 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4cfc302f5d5a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:42:73:ac:ba:18:bb} reservation:<nil>}
	I1129 09:17:33.029339  343912 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-05a73bbe16b8 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:a9:af:00:78:ac} reservation:<nil>}
	I1129 09:17:33.030217  343912 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eb6cd0}
	I1129 09:17:33.030243  343912 network_create.go:124] attempt to create docker network newest-cni-020433 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1129 09:17:33.030303  343912 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-020433 newest-cni-020433
	I1129 09:17:33.088543  343912 network_create.go:108] docker network newest-cni-020433 192.168.76.0/24 created
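	Note: the subnet that minikube settled on after skipping the taken 192.168.49/58/67 ranges can be read back with the docker CLI alone; a minimal check, assuming the network created in this run still exists, is to query its IPAM config (the format string is standard Go templating over the inspect output):
	
	  docker network inspect newest-cni-020433 --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	  # expected for this run: 192.168.76.0/24 192.168.76.1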
	I1129 09:17:33.088582  343912 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-020433" container
	I1129 09:17:33.088651  343912 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1129 09:17:33.110031  343912 cli_runner.go:164] Run: docker volume create newest-cni-020433 --label name.minikube.sigs.k8s.io=newest-cni-020433 --label created_by.minikube.sigs.k8s.io=true
	I1129 09:17:33.131986  343912 oci.go:103] Successfully created a docker volume newest-cni-020433
	I1129 09:17:33.132086  343912 cli_runner.go:164] Run: docker run --rm --name newest-cni-020433-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-020433 --entrypoint /usr/bin/test -v newest-cni-020433:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1129 09:17:33.542784  343912 oci.go:107] Successfully prepared a docker volume newest-cni-020433
	I1129 09:17:33.542890  343912 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:17:33.542904  343912 kic.go:194] Starting extracting preloaded images to volume ...
	I1129 09:17:33.542963  343912 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-020433:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	W1129 09:17:38.328506  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	W1129 09:17:40.827427  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	W1129 09:17:38.452229  336858 pod_ready.go:104] pod "coredns-66bc5c9577-z4m7c" is not "Ready", error: <nil>
	W1129 09:17:40.947913  336858 pod_ready.go:104] pod "coredns-66bc5c9577-z4m7c" is not "Ready", error: <nil>
	I1129 09:17:38.398985  343912 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-020433:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.855972089s)
	I1129 09:17:38.399017  343912 kic.go:203] duration metric: took 4.856111068s to extract preloaded images to volume ...
	W1129 09:17:38.399145  343912 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1129 09:17:38.399190  343912 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1129 09:17:38.399238  343912 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1129 09:17:38.467132  343912 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-020433 --name newest-cni-020433 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-020433 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-020433 --network newest-cni-020433 --ip 192.168.76.2 --volume newest-cni-020433:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
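	Note: each --publish=127.0.0.1:: flag in the docker run above binds an ephemeral host port; the SSH port the provisioner dials a few lines below (33129 in this run) can be recovered, as a sketch, with the standard docker port subcommand rather than the container-inspect template minikube uses:
	
	  docker port newest-cni-020433 22/tcp
	  # e.g. 127.0.0.1:33129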
	I1129 09:17:39.064807  343912 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Running}}
	I1129 09:17:39.085951  343912 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Status}}
	I1129 09:17:39.108652  343912 cli_runner.go:164] Run: docker exec newest-cni-020433 stat /var/lib/dpkg/alternatives/iptables
	I1129 09:17:39.159933  343912 oci.go:144] the created container "newest-cni-020433" has a running status.
	I1129 09:17:39.159970  343912 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa...
	I1129 09:17:39.228797  343912 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1129 09:17:39.262675  343912 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Status}}
	I1129 09:17:39.285576  343912 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1129 09:17:39.285600  343912 kic_runner.go:114] Args: [docker exec --privileged newest-cni-020433 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1129 09:17:39.349410  343912 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Status}}
	I1129 09:17:39.369689  343912 machine.go:94] provisionDockerMachine start ...
	I1129 09:17:39.369803  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:39.396522  343912 main.go:143] libmachine: Using SSH client type: native
	I1129 09:17:39.396932  343912 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1129 09:17:39.396965  343912 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:17:39.397982  343912 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1129 09:17:42.550448  343912 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-020433
	
	I1129 09:17:42.550474  343912 ubuntu.go:182] provisioning hostname "newest-cni-020433"
	I1129 09:17:42.550527  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:42.572133  343912 main.go:143] libmachine: Using SSH client type: native
	I1129 09:17:42.572440  343912 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1129 09:17:42.572461  343912 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-020433 && echo "newest-cni-020433" | sudo tee /etc/hostname
	I1129 09:17:42.733805  343912 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-020433
	
	I1129 09:17:42.733897  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:42.754783  343912 main.go:143] libmachine: Using SSH client type: native
	I1129 09:17:42.755144  343912 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1129 09:17:42.755173  343912 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-020433' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-020433/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-020433' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:17:42.901064  343912 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 09:17:42.901098  343912 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-5652/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-5652/.minikube}
	I1129 09:17:42.901148  343912 ubuntu.go:190] setting up certificates
	I1129 09:17:42.901161  343912 provision.go:84] configureAuth start
	I1129 09:17:42.901231  343912 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-020433
	I1129 09:17:42.921161  343912 provision.go:143] copyHostCerts
	I1129 09:17:42.921240  343912 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem, removing ...
	I1129 09:17:42.921253  343912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem
	I1129 09:17:42.921344  343912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem (1078 bytes)
	I1129 09:17:42.921497  343912 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem, removing ...
	I1129 09:17:42.921509  343912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem
	I1129 09:17:42.921568  343912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem (1123 bytes)
	I1129 09:17:42.921658  343912 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem, removing ...
	I1129 09:17:42.921666  343912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem
	I1129 09:17:42.921693  343912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem (1675 bytes)
	I1129 09:17:42.921761  343912 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem org=jenkins.newest-cni-020433 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-020433]
	I1129 09:17:43.032466  343912 provision.go:177] copyRemoteCerts
	I1129 09:17:43.032525  343912 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:17:43.032558  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:43.052823  343912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:17:43.158233  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1129 09:17:43.179138  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1129 09:17:43.198311  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1129 09:17:43.217652  343912 provision.go:87] duration metric: took 316.475572ms to configureAuth
	I1129 09:17:43.217682  343912 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:17:43.217917  343912 config.go:182] Loaded profile config "newest-cni-020433": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:17:43.218034  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:43.237980  343912 main.go:143] libmachine: Using SSH client type: native
	I1129 09:17:43.238211  343912 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1129 09:17:43.238225  343912 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 09:17:43.535016  343912 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 09:17:43.535041  343912 machine.go:97] duration metric: took 4.165320057s to provisionDockerMachine
	I1129 09:17:43.535052  343912 client.go:176] duration metric: took 10.568687757s to LocalClient.Create
	I1129 09:17:43.535073  343912 start.go:167] duration metric: took 10.568756916s to libmachine.API.Create "newest-cni-020433"
	I1129 09:17:43.535083  343912 start.go:293] postStartSetup for "newest-cni-020433" (driver="docker")
	I1129 09:17:43.535095  343912 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:17:43.535160  343912 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:17:43.535203  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:43.554574  343912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:17:43.661234  343912 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:17:43.665051  343912 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:17:43.665086  343912 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:17:43.665114  343912 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5652/.minikube/addons for local assets ...
	I1129 09:17:43.665186  343912 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5652/.minikube/files for local assets ...
	I1129 09:17:43.665301  343912 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem -> 92162.pem in /etc/ssl/certs
	I1129 09:17:43.665409  343912 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:17:43.674165  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem --> /etc/ssl/certs/92162.pem (1708 bytes)
	I1129 09:17:43.696383  343912 start.go:296] duration metric: took 161.286243ms for postStartSetup
	I1129 09:17:43.696751  343912 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-020433
	I1129 09:17:43.716301  343912 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/config.json ...
	I1129 09:17:43.716589  343912 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:17:43.716640  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:43.735518  343912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:17:43.835307  343912 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 09:17:43.840211  343912 start.go:128] duration metric: took 10.876067654s to createHost
	I1129 09:17:43.840237  343912 start.go:83] releasing machines lock for "newest-cni-020433", held for 10.876224942s
	I1129 09:17:43.840309  343912 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-020433
	I1129 09:17:43.860942  343912 ssh_runner.go:195] Run: cat /version.json
	I1129 09:17:43.860995  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:43.861019  343912 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:17:43.861110  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:43.881396  343912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:17:43.881825  343912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:17:44.035348  343912 ssh_runner.go:195] Run: systemctl --version
	I1129 09:17:44.042398  343912 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 09:17:44.079667  343912 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:17:44.084668  343912 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:17:44.084747  343912 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:17:44.112611  343912 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1129 09:17:44.112638  343912 start.go:496] detecting cgroup driver to use...
	I1129 09:17:44.112675  343912 detect.go:190] detected "systemd" cgroup driver on host os
	I1129 09:17:44.112721  343912 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 09:17:44.130191  343912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 09:17:44.143333  343912 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:17:44.143407  343912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:17:44.160522  343912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:17:44.179005  343912 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:17:44.264507  343912 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:17:44.361596  343912 docker.go:234] disabling docker service ...
	I1129 09:17:44.361665  343912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:17:44.385098  343912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:17:44.399261  343912 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:17:44.490353  343912 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:17:44.577339  343912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:17:44.590606  343912 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:17:44.606040  343912 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 09:17:44.606113  343912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:44.617850  343912 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1129 09:17:44.617930  343912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:44.627795  343912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:44.637388  343912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:44.647881  343912 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:17:44.657593  343912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:44.667667  343912 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:44.683312  343912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:44.693180  343912 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:17:44.701299  343912 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:17:44.709519  343912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:17:44.789707  343912 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1129 09:17:44.946719  343912 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 09:17:44.946786  343912 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 09:17:44.950988  343912 start.go:564] Will wait 60s for crictl version
	I1129 09:17:44.951061  343912 ssh_runner.go:195] Run: which crictl
	I1129 09:17:44.954897  343912 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:17:44.981273  343912 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1129 09:17:44.981355  343912 ssh_runner.go:195] Run: crio --version
	I1129 09:17:45.010241  343912 ssh_runner.go:195] Run: crio --version
	I1129 09:17:45.041932  343912 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1129 09:17:45.043598  343912 cli_runner.go:164] Run: docker network inspect newest-cni-020433 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:17:45.064493  343912 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1129 09:17:45.068916  343912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:17:45.081636  343912 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1129 09:17:43.447332  336858 pod_ready.go:104] pod "coredns-66bc5c9577-z4m7c" is not "Ready", error: <nil>
	I1129 09:17:44.449613  336858 pod_ready.go:94] pod "coredns-66bc5c9577-z4m7c" is "Ready"
	I1129 09:17:44.449647  336858 pod_ready.go:86] duration metric: took 31.007906695s for pod "coredns-66bc5c9577-z4m7c" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:44.452244  336858 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:44.456751  336858 pod_ready.go:94] pod "etcd-default-k8s-diff-port-632243" is "Ready"
	I1129 09:17:44.456779  336858 pod_ready.go:86] duration metric: took 4.509231ms for pod "etcd-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:44.458972  336858 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:44.464014  336858 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-632243" is "Ready"
	I1129 09:17:44.464045  336858 pod_ready.go:86] duration metric: took 5.045626ms for pod "kube-apiserver-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:44.466444  336858 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:44.645988  336858 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-632243" is "Ready"
	I1129 09:17:44.646021  336858 pod_ready.go:86] duration metric: took 179.551463ms for pod "kube-controller-manager-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:44.845460  336858 pod_ready.go:83] waiting for pod "kube-proxy-p2nf7" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:45.245518  336858 pod_ready.go:94] pod "kube-proxy-p2nf7" is "Ready"
	I1129 09:17:45.245548  336858 pod_ready.go:86] duration metric: took 400.053767ms for pod "kube-proxy-p2nf7" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:45.445969  336858 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:45.847024  336858 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-632243" is "Ready"
	I1129 09:17:45.847054  336858 pod_ready.go:86] duration metric: took 401.056115ms for pod "kube-scheduler-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:45.847067  336858 pod_ready.go:40] duration metric: took 32.409409019s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:17:45.894722  336858 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1129 09:17:45.896514  336858 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-632243" cluster and "default" namespace by default
	W1129 09:17:42.828310  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	W1129 09:17:44.828378  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	I1129 09:17:45.082734  343912 kubeadm.go:884] updating cluster {Name:newest-cni-020433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-020433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:17:45.082902  343912 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:17:45.082966  343912 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:17:45.116711  343912 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:17:45.116737  343912 crio.go:433] Images already preloaded, skipping extraction
	I1129 09:17:45.116794  343912 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:17:45.143455  343912 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:17:45.143477  343912 cache_images.go:86] Images are preloaded, skipping loading
	I1129 09:17:45.143484  343912 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1129 09:17:45.143562  343912 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-020433 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-020433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 09:17:45.143624  343912 ssh_runner.go:195] Run: crio config
	I1129 09:17:45.191199  343912 cni.go:84] Creating CNI manager for ""
	I1129 09:17:45.191226  343912 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:17:45.191244  343912 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1129 09:17:45.191264  343912 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-020433 NodeName:newest-cni-020433 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:17:45.191372  343912 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-020433"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
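	Note: the generated config above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below; as a sketch, assuming a matching kubeadm v1.34 binary is available on the node, such a file can be sanity-checked without mutating the cluster via a dry run:
	
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run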
	
	I1129 09:17:45.191438  343912 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 09:17:45.199969  343912 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 09:17:45.200043  343912 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:17:45.208777  343912 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1129 09:17:45.222978  343912 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:17:45.238915  343912 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1129 09:17:45.253505  343912 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1129 09:17:45.257546  343912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
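
The /etc/hosts rewrite above is a replace-or-append idiom: strip any line already ending in the host name, append the fresh mapping, and copy a temp file back over /etc/hosts. A minimal Go sketch of the same idiom; upsertHostsEntry is a hypothetical helper that operates on a string rather than on the real file.

package main

import (
	"os"
	"strings"
)

// upsertHostsEntry mirrors the bash one-liner above: drop any line that
// already maps the host name, then append the fresh "IP<TAB>name" entry.
func upsertHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	const sample = "127.0.0.1\tlocalhost\n10.0.0.9\tcontrol-plane.minikube.internal\n"
	os.Stdout.WriteString(upsertHostsEntry(sample, "192.168.76.2", "control-plane.minikube.internal"))
}
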
	I1129 09:17:45.269034  343912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:17:45.354518  343912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:17:45.382355  343912 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433 for IP: 192.168.76.2
	I1129 09:17:45.382379  343912 certs.go:195] generating shared ca certs ...
	I1129 09:17:45.382407  343912 certs.go:227] acquiring lock for ca certs: {Name:mk9945b3aa42b3d50b9bfd795af3a1c63f4e35bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:45.382577  343912 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key
	I1129 09:17:45.382636  343912 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key
	I1129 09:17:45.382650  343912 certs.go:257] generating profile certs ...
	I1129 09:17:45.382718  343912 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/client.key
	I1129 09:17:45.382739  343912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/client.crt with IP's: []
	I1129 09:17:45.531926  343912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/client.crt ...
	I1129 09:17:45.531957  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/client.crt: {Name:mkeb17feaf8ba6750a01bd0a1f0441d4154bc65f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:45.532140  343912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/client.key ...
	I1129 09:17:45.532151  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/client.key: {Name:mke1454a7dc3fbfdd29bdb836050690bcbb7394e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:45.532230  343912 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.key.22e84c70
	I1129 09:17:45.532247  343912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.crt.22e84c70 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1129 09:17:45.624876  343912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.crt.22e84c70 ...
	I1129 09:17:45.624908  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.crt.22e84c70: {Name:mk7ef25787741e084b6a866e43c94e1e8fef637a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:45.625077  343912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.key.22e84c70 ...
	I1129 09:17:45.625090  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.key.22e84c70: {Name:mk1ecd69640eeb4a11bb5f1e1ff7ab99459cb558 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:45.625222  343912 certs.go:382] copying /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.crt.22e84c70 -> /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.crt
	I1129 09:17:45.625303  343912 certs.go:386] copying /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.key.22e84c70 -> /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.key
	I1129 09:17:45.625381  343912 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.key
	I1129 09:17:45.625401  343912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.crt with IP's: []
	I1129 09:17:45.648826  343912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.crt ...
	I1129 09:17:45.648864  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.crt: {Name:mk66c6222d92d3d2bb033717f49fc6858d0a9367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:45.649040  343912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.key ...
	I1129 09:17:45.649052  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.key: {Name:mk559719a3cba034552025e578cadb28054704f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:45.649223  343912 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216.pem (1338 bytes)
	W1129 09:17:45.649259  343912 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216_empty.pem, impossibly tiny 0 bytes
	I1129 09:17:45.649269  343912 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem (1679 bytes)
	I1129 09:17:45.649291  343912 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem (1078 bytes)
	I1129 09:17:45.649314  343912 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:17:45.649337  343912 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem (1675 bytes)
	I1129 09:17:45.649376  343912 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem (1708 bytes)
	I1129 09:17:45.649920  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:17:45.669435  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 09:17:45.688777  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:17:45.707612  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1129 09:17:45.726954  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1129 09:17:45.745570  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1129 09:17:45.763773  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:17:45.781717  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1129 09:17:45.799936  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem --> /usr/share/ca-certificates/92162.pem (1708 bytes)
	I1129 09:17:45.820108  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:17:45.839214  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216.pem --> /usr/share/ca-certificates/9216.pem (1338 bytes)
	I1129 09:17:45.859643  343912 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:17:45.874007  343912 ssh_runner.go:195] Run: openssl version
	I1129 09:17:45.880775  343912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9216.pem && ln -fs /usr/share/ca-certificates/9216.pem /etc/ssl/certs/9216.pem"
	I1129 09:17:45.890438  343912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9216.pem
	I1129 09:17:45.894494  343912 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:34 /usr/share/ca-certificates/9216.pem
	I1129 09:17:45.894554  343912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9216.pem
	I1129 09:17:45.934499  343912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9216.pem /etc/ssl/certs/51391683.0"
	I1129 09:17:45.944013  343912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/92162.pem && ln -fs /usr/share/ca-certificates/92162.pem /etc/ssl/certs/92162.pem"
	I1129 09:17:45.953676  343912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92162.pem
	I1129 09:17:45.957999  343912 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:34 /usr/share/ca-certificates/92162.pem
	I1129 09:17:45.958047  343912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92162.pem
	I1129 09:17:45.998219  343912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/92162.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 09:17:46.008105  343912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:17:46.018512  343912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:17:46.022778  343912 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:17:46.022855  343912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:17:46.060278  343912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
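
The openssl/ln pairs above implement OpenSSL's hashed-directory lookup: each CA in /etc/ssl/certs must be reachable through a <subject-hash>.0 symlink, and `openssl x509 -hash -noout` prints that hash. A sketch of creating one such link, assuming openssl is on PATH; linkBySubjectHash is a hypothetical helper, not minikube's code.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash reproduces one openssl/ln pair from the log: ask
// openssl for the certificate's subject hash, then symlink <hash>.0 to
// the PEM inside the trust directory.
func linkBySubjectHash(pem, dir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(dir, strings.TrimSpace(string(out))+".0")
	os.Remove(link) // ln -fs semantics: replace any stale link first
	return os.Symlink(pem, link)
}

func main() {
	// Paths from the run above; needs the same root privileges the log's sudo provides.
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
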
	I1129 09:17:46.069685  343912 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:17:46.073627  343912 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1129 09:17:46.073677  343912 kubeadm.go:401] StartCluster: {Name:newest-cni-020433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-020433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:17:46.073751  343912 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 09:17:46.073796  343912 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:17:46.102729  343912 cri.go:89] found id: ""
	I1129 09:17:46.102806  343912 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:17:46.111499  343912 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1129 09:17:46.120045  343912 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1129 09:17:46.120110  343912 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1129 09:17:46.128326  343912 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1129 09:17:46.128366  343912 kubeadm.go:158] found existing configuration files:
	
	I1129 09:17:46.128413  343912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1129 09:17:46.136677  343912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1129 09:17:46.136741  343912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1129 09:17:46.144727  343912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1129 09:17:46.152908  343912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1129 09:17:46.152971  343912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1129 09:17:46.161300  343912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1129 09:17:46.170050  343912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1129 09:17:46.170117  343912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1129 09:17:46.179094  343912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1129 09:17:46.190258  343912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1129 09:17:46.190325  343912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1129 09:17:46.200333  343912 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1129 09:17:46.284775  343912 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1129 09:17:46.350549  343912 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1129 09:17:47.327775  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	W1129 09:17:49.327943  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	I1129 09:17:49.827724  336547 pod_ready.go:94] pod "coredns-66bc5c9577-ptx67" is "Ready"
	I1129 09:17:49.827757  336547 pod_ready.go:86] duration metric: took 36.505830154s for pod "coredns-66bc5c9577-ptx67" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:49.830193  336547 pod_ready.go:83] waiting for pod "etcd-embed-certs-160987" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:49.834087  336547 pod_ready.go:94] pod "etcd-embed-certs-160987" is "Ready"
	I1129 09:17:49.834117  336547 pod_ready.go:86] duration metric: took 3.892584ms for pod "etcd-embed-certs-160987" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:49.836236  336547 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-160987" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:49.840124  336547 pod_ready.go:94] pod "kube-apiserver-embed-certs-160987" is "Ready"
	I1129 09:17:49.840148  336547 pod_ready.go:86] duration metric: took 3.889352ms for pod "kube-apiserver-embed-certs-160987" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:49.842042  336547 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-160987" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:50.026423  336547 pod_ready.go:94] pod "kube-controller-manager-embed-certs-160987" is "Ready"
	I1129 09:17:50.026453  336547 pod_ready.go:86] duration metric: took 184.390727ms for pod "kube-controller-manager-embed-certs-160987" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:50.225618  336547 pod_ready.go:83] waiting for pod "kube-proxy-57l9h" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:50.626123  336547 pod_ready.go:94] pod "kube-proxy-57l9h" is "Ready"
	I1129 09:17:50.626149  336547 pod_ready.go:86] duration metric: took 400.500945ms for pod "kube-proxy-57l9h" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:50.826449  336547 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-160987" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:51.226295  336547 pod_ready.go:94] pod "kube-scheduler-embed-certs-160987" is "Ready"
	I1129 09:17:51.226329  336547 pod_ready.go:86] duration metric: took 399.854281ms for pod "kube-scheduler-embed-certs-160987" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:51.226346  336547 pod_ready.go:40] duration metric: took 37.909395781s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:17:51.285055  336547 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1129 09:17:51.286778  336547 out.go:179] * Done! kubectl is now configured to use "embed-certs-160987" cluster and "default" namespace by default
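
The pod_ready waits above (from the interleaved embed-certs run, PID 336547) apply a per-pod "Ready or be gone" rule. A minimal client-go sketch of that rule, assuming a reachable kubeconfig at the default location; waitPodReady is a hypothetical helper polling on a fixed 2s interval, not minikube's internal implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the pod's PodReady condition is True, the pod
// is gone, or the timeout expires -- the "Ready or be gone" rule above.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return nil // "be gone" also counts as done
		}
		if err != nil {
			return err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s never became Ready", ns, name)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitPodReady(kubernetes.NewForConfigOrDie(cfg), "kube-system", "etcd-embed-certs-160987", 4*time.Minute))
}
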
	I1129 09:17:56.491067  343912 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1129 09:17:56.491128  343912 kubeadm.go:319] [preflight] Running pre-flight checks
	I1129 09:17:56.491204  343912 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1129 09:17:56.491252  343912 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1129 09:17:56.491321  343912 kubeadm.go:319] OS: Linux
	I1129 09:17:56.491400  343912 kubeadm.go:319] CGROUPS_CPU: enabled
	I1129 09:17:56.491441  343912 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1129 09:17:56.491502  343912 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1129 09:17:56.491558  343912 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1129 09:17:56.491602  343912 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1129 09:17:56.491642  343912 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1129 09:17:56.491683  343912 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1129 09:17:56.491733  343912 kubeadm.go:319] CGROUPS_IO: enabled
	I1129 09:17:56.491834  343912 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1129 09:17:56.491984  343912 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1129 09:17:56.492110  343912 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1129 09:17:56.492184  343912 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1129 09:17:56.493947  343912 out.go:252]   - Generating certificates and keys ...
	I1129 09:17:56.494037  343912 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1129 09:17:56.494134  343912 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1129 09:17:56.494235  343912 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1129 09:17:56.494315  343912 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1129 09:17:56.494392  343912 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1129 09:17:56.494466  343912 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1129 09:17:56.494546  343912 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1129 09:17:56.494718  343912 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-020433] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1129 09:17:56.494781  343912 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1129 09:17:56.494923  343912 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-020433] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1129 09:17:56.495006  343912 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1129 09:17:56.495078  343912 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1129 09:17:56.495157  343912 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1129 09:17:56.495234  343912 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1129 09:17:56.495280  343912 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1129 09:17:56.495370  343912 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1129 09:17:56.495457  343912 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1129 09:17:56.495570  343912 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1129 09:17:56.495624  343912 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1129 09:17:56.495696  343912 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1129 09:17:56.495760  343912 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1129 09:17:56.497322  343912 out.go:252]   - Booting up control plane ...
	I1129 09:17:56.497460  343912 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1129 09:17:56.497563  343912 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1129 09:17:56.497652  343912 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1129 09:17:56.497741  343912 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1129 09:17:56.497818  343912 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1129 09:17:56.497976  343912 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1129 09:17:56.498111  343912 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1129 09:17:56.498169  343912 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1129 09:17:56.498335  343912 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1129 09:17:56.498461  343912 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1129 09:17:56.498530  343912 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.935954ms
	I1129 09:17:56.498616  343912 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1129 09:17:56.498731  343912 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1129 09:17:56.498879  343912 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1129 09:17:56.498988  343912 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1129 09:17:56.499073  343912 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.504475511s
	I1129 09:17:56.499172  343912 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.695464789s
	I1129 09:17:56.499266  343912 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501879872s
	I1129 09:17:56.499440  343912 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1129 09:17:56.499624  343912 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1129 09:17:56.499691  343912 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1129 09:17:56.500020  343912 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-020433 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1129 09:17:56.500135  343912 kubeadm.go:319] [bootstrap-token] Using token: f82gs2.l4bciq1r030lvxp0
	I1129 09:17:56.501325  343912 out.go:252]   - Configuring RBAC rules ...
	I1129 09:17:56.501453  343912 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1129 09:17:56.501553  343912 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1129 09:17:56.501684  343912 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1129 09:17:56.501866  343912 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1129 09:17:56.502025  343912 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1129 09:17:56.502108  343912 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1129 09:17:56.502227  343912 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1129 09:17:56.502273  343912 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1129 09:17:56.502315  343912 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1129 09:17:56.502321  343912 kubeadm.go:319] 
	I1129 09:17:56.502376  343912 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1129 09:17:56.502381  343912 kubeadm.go:319] 
	I1129 09:17:56.502451  343912 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1129 09:17:56.502460  343912 kubeadm.go:319] 
	I1129 09:17:56.502481  343912 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1129 09:17:56.502532  343912 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1129 09:17:56.502576  343912 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1129 09:17:56.502586  343912 kubeadm.go:319] 
	I1129 09:17:56.502629  343912 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1129 09:17:56.502639  343912 kubeadm.go:319] 
	I1129 09:17:56.502689  343912 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1129 09:17:56.502697  343912 kubeadm.go:319] 
	I1129 09:17:56.502745  343912 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1129 09:17:56.502810  343912 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1129 09:17:56.502890  343912 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1129 09:17:56.502897  343912 kubeadm.go:319] 
	I1129 09:17:56.502971  343912 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1129 09:17:56.503057  343912 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1129 09:17:56.503064  343912 kubeadm.go:319] 
	I1129 09:17:56.503140  343912 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token f82gs2.l4bciq1r030lvxp0 \
	I1129 09:17:56.503224  343912 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c67f487f8861b4229187cb5510054720af943291cf02c59a2b23e487361f58e2 \
	I1129 09:17:56.503244  343912 kubeadm.go:319] 	--control-plane 
	I1129 09:17:56.503252  343912 kubeadm.go:319] 
	I1129 09:17:56.503335  343912 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1129 09:17:56.503344  343912 kubeadm.go:319] 
	I1129 09:17:56.503417  343912 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token f82gs2.l4bciq1r030lvxp0 \
	I1129 09:17:56.503523  343912 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c67f487f8861b4229187cb5510054720af943291cf02c59a2b23e487361f58e2 
	I1129 09:17:56.503547  343912 cni.go:84] Creating CNI manager for ""
	I1129 09:17:56.503557  343912 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:17:56.504793  343912 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1129 09:17:56.505922  343912 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1129 09:17:56.510364  343912 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1129 09:17:56.510383  343912 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1129 09:17:56.523891  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1129 09:17:56.771723  343912 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1129 09:17:56.771759  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:17:56.771857  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-020433 minikube.k8s.io/updated_at=2025_11_29T09_17_56_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af minikube.k8s.io/name=newest-cni-020433 minikube.k8s.io/primary=true
	I1129 09:17:56.870386  343912 ops.go:34] apiserver oom_adj: -16
	I1129 09:17:56.870493  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:17:57.370894  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:17:57.870685  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:17:58.370644  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:17:58.870909  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:17:59.371245  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:17:59.871577  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:18:00.370624  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:18:00.871043  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:18:01.370798  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:18:01.470229  343912 kubeadm.go:1114] duration metric: took 4.69851702s to wait for elevateKubeSystemPrivileges
	I1129 09:18:01.470353  343912 kubeadm.go:403] duration metric: took 15.396675728s to StartCluster
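
The repeated `kubectl get sa default` runs above are a plain poll: `kubeadm init` returns before the controller manager has created the default service account, so the command is retried roughly every 500ms until it succeeds. A sketch of the same loop via os/exec; waitDefaultSA is a hypothetical helper and kubectl must be on PATH.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitDefaultSA re-runs `kubectl get sa default` until it exits zero,
// matching the ~500ms polling loop in the log.
func waitDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	fmt.Println(waitDefaultSA("/var/lib/minikube/kubeconfig", time.Minute))
}
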
	I1129 09:18:01.470403  343912 settings.go:142] acquiring lock: {Name:mkebe0b2667fa03f51e92459cd7b7f37fd4c23bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:18:01.470526  343912 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 09:18:01.473161  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/kubeconfig: {Name:mk6280be5f60ace95b1c1acc672c087e2366542f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:18:01.473501  343912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1129 09:18:01.473529  343912 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 09:18:01.473595  343912 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-020433"
	I1129 09:18:01.473611  343912 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-020433"
	I1129 09:18:01.473639  343912 host.go:66] Checking if "newest-cni-020433" exists ...
	I1129 09:18:01.473786  343912 config.go:182] Loaded profile config "newest-cni-020433": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:18:01.473872  343912 addons.go:70] Setting default-storageclass=true in profile "newest-cni-020433"
	I1129 09:18:01.473890  343912 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-020433"
	I1129 09:18:01.473505  343912 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 09:18:01.474234  343912 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Status}}
	I1129 09:18:01.474263  343912 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Status}}
	I1129 09:18:01.477129  343912 out.go:179] * Verifying Kubernetes components...
	I1129 09:18:01.478510  343912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:18:01.508488  343912 addons.go:239] Setting addon default-storageclass=true in "newest-cni-020433"
	I1129 09:18:01.508544  343912 host.go:66] Checking if "newest-cni-020433" exists ...
	I1129 09:18:01.509017  343912 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Status}}
	I1129 09:18:01.512765  343912 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:18:01.513878  343912 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:18:01.513901  343912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 09:18:01.513969  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:18:01.548536  343912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:18:01.549743  343912 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 09:18:01.549766  343912 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 09:18:01.549824  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:18:01.577630  343912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:18:01.603306  343912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1129 09:18:01.652699  343912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:18:01.679084  343912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:18:01.710552  343912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 09:18:01.806299  343912 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1129 09:18:01.808103  343912 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:18:01.808185  343912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:18:02.050418  343912 api_server.go:72] duration metric: took 576.481112ms to wait for apiserver process to appear ...
	I1129 09:18:02.050443  343912 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:18:02.050462  343912 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 09:18:02.057555  343912 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1129 09:18:02.058665  343912 api_server.go:141] control plane version: v1.34.1
	I1129 09:18:02.058689  343912 api_server.go:131] duration metric: took 8.238938ms to wait for apiserver health ...
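
The healthz wait above is an HTTPS GET against https://192.168.76.2:8443/healthz expecting a 200 response with body "ok". A self-contained sketch of that probe; it skips TLS verification only because this sketch does not load the cluster CA that the real check would use.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz performs the probe from api_server.go above: GET /healthz
// on the apiserver and expect HTTP 200 with body "ok".
func checkHealthz(endpoint string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(endpoint)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	fmt.Println(checkHealthz("https://192.168.76.2:8443/healthz"))
}
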
	I1129 09:18:02.058698  343912 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:18:02.059528  343912 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1129 09:18:02.062186  343912 addons.go:530] duration metric: took 588.650166ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1129 09:18:02.062399  343912 system_pods.go:59] 8 kube-system pods found
	I1129 09:18:02.062440  343912 system_pods.go:61] "coredns-66bc5c9577-h8nqv" [c8cbc934-0df3-44c5-a3d7-fff7ca54ef86] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1129 09:18:02.062454  343912 system_pods.go:61] "etcd-newest-cni-020433" [47991984-6243-463b-9cda-95d0e18b6092] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 09:18:02.062465  343912 system_pods.go:61] "kindnet-gxgwn" [7e13d750-7bcf-4e2a-9663-512ecc23781a] Running
	I1129 09:18:02.062474  343912 system_pods.go:61] "kube-apiserver-newest-cni-020433" [20641eff-ff31-4e31-8983-1075116bcdd4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 09:18:02.062488  343912 system_pods.go:61] "kube-controller-manager-newest-cni-020433" [f5bece62-e41a-4cf6-bacc-29d4dd0754cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 09:18:02.062494  343912 system_pods.go:61] "kube-proxy-nqwzp" [118d6bdc-5c33-4ab5-bee8-6f8a3447c461] Running
	I1129 09:18:02.062507  343912 system_pods.go:61] "kube-scheduler-newest-cni-020433" [3224b587-95a1-4963-88ae-af38a3bd1d84] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 09:18:02.062523  343912 system_pods.go:61] "storage-provisioner" [30a16c03-a054-435c-8eec-ce64486eb6c2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1129 09:18:02.062532  343912 system_pods.go:74] duration metric: took 3.827683ms to wait for pod list to return data ...
	I1129 09:18:02.062545  343912 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:18:02.065169  343912 default_sa.go:45] found service account: "default"
	I1129 09:18:02.065192  343912 default_sa.go:55] duration metric: took 2.641298ms for default service account to be created ...
	I1129 09:18:02.065203  343912 kubeadm.go:587] duration metric: took 591.270549ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1129 09:18:02.065217  343912 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:18:02.068226  343912 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1129 09:18:02.068253  343912 node_conditions.go:123] node cpu capacity is 8
	I1129 09:18:02.068266  343912 node_conditions.go:105] duration metric: took 3.045433ms to run NodePressure ...
	I1129 09:18:02.068278  343912 start.go:242] waiting for startup goroutines ...
	I1129 09:18:02.310908  343912 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-020433" context rescaled to 1 replicas
	I1129 09:18:02.310945  343912 start.go:247] waiting for cluster config update ...
	I1129 09:18:02.310956  343912 start.go:256] writing updated cluster config ...
	I1129 09:18:02.311264  343912 ssh_runner.go:195] Run: rm -f paused
	I1129 09:18:02.364750  343912 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1129 09:18:02.367739  343912 out.go:179] * Done! kubectl is now configured to use "newest-cni-020433" cluster and "default" namespace by default
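
The final version line compares kubectl's minor version with the cluster's; a difference of more than one minor version falls outside the kubectl version-skew support policy, which is why the log reports "(minor skew: 0)". A sketch of that comparison; minorSkew is a hypothetical helper.

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor components
// of two "major.minor.patch" versions -- the "(minor skew: 0)" figure above.
func minorSkew(client, server string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("malformed version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	c, err := minor(client)
	if err != nil {
		return 0, err
	}
	s, err := minor(server)
	if err != nil {
		return 0, err
	}
	if c > s {
		return c - s, nil
	}
	return s - c, nil
}

func main() {
	skew, _ := minorSkew("1.34.2", "1.34.1")
	fmt.Println("minor skew:", skew) // prints: minor skew: 0
}
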
	
	
	==> CRI-O <==
	Nov 29 09:18:01 newest-cni-020433 crio[784]: time="2025-11-29T09:18:01.3828973Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:18:01 newest-cni-020433 crio[784]: time="2025-11-29T09:18:01.38613454Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=9c122490-8f49-4c1c-8626-749e12542b0f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 09:18:01 newest-cni-020433 crio[784]: time="2025-11-29T09:18:01.388315117Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 29 09:18:01 newest-cni-020433 crio[784]: time="2025-11-29T09:18:01.389271439Z" level=info msg="Ran pod sandbox 60031598ee719b3654acc07d029c0d0478158a03507276464768784f5d628d8e with infra container: kube-system/kube-proxy-nqwzp/POD" id=9c122490-8f49-4c1c-8626-749e12542b0f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 09:18:01 newest-cni-020433 crio[784]: time="2025-11-29T09:18:01.391070638Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=0c6b5cb6-6252-4876-a389-4d912cf81dc4 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:18:01 newest-cni-020433 crio[784]: time="2025-11-29T09:18:01.391712292Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=7a3bd3c0-3df6-4394-a345-fc9d61399f4e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 09:18:01 newest-cni-020433 crio[784]: time="2025-11-29T09:18:01.392698779Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=d87b19f3-99b7-4160-8465-81f58169bccd name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:18:01 newest-cni-020433 crio[784]: time="2025-11-29T09:18:01.393706634Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 29 09:18:01 newest-cni-020433 crio[784]: time="2025-11-29T09:18:01.394692899Z" level=info msg="Ran pod sandbox becf6e87464bbcf894aa7ab32fed6d7d64878439c08d1d0c2fea6d64be1bfe03 with infra container: kube-system/kindnet-gxgwn/POD" id=7a3bd3c0-3df6-4394-a345-fc9d61399f4e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 09:18:01 newest-cni-020433 crio[784]: time="2025-11-29T09:18:01.396064608Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=7e553f6b-7b50-4fa1-b0c4-ba66f8499152 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:18:01 newest-cni-020433 crio[784]: time="2025-11-29T09:18:01.397165766Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=f249b0b7-be03-449c-a854-5da0804ba991 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:18:01 newest-cni-020433 crio[784]: time="2025-11-29T09:18:01.397337636Z" level=info msg="Creating container: kube-system/kube-proxy-nqwzp/kube-proxy" id=ec734b6d-7139-46b3-b3d9-f89b9a5eb467 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:18:01 newest-cni-020433 crio[784]: time="2025-11-29T09:18:01.397475991Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:18:01 newest-cni-020433 crio[784]: time="2025-11-29T09:18:01.400986022Z" level=info msg="Creating container: kube-system/kindnet-gxgwn/kindnet-cni" id=d23b5ad1-d5d9-4536-9f5c-89e8c0ed984f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:18:01 newest-cni-020433 crio[784]: time="2025-11-29T09:18:01.40112965Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:18:01 newest-cni-020433 crio[784]: time="2025-11-29T09:18:01.404893956Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:18:01 newest-cni-020433 crio[784]: time="2025-11-29T09:18:01.40554913Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:18:01 newest-cni-020433 crio[784]: time="2025-11-29T09:18:01.406930527Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:18:01 newest-cni-020433 crio[784]: time="2025-11-29T09:18:01.40749347Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:18:01 newest-cni-020433 crio[784]: time="2025-11-29T09:18:01.440418656Z" level=info msg="Created container 6679ca25aaa96fd7b53bbdd7bf1235bac18b0094e1f8739d0e68f6051cdb3a25: kube-system/kindnet-gxgwn/kindnet-cni" id=d23b5ad1-d5d9-4536-9f5c-89e8c0ed984f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:18:01 newest-cni-020433 crio[784]: time="2025-11-29T09:18:01.441277174Z" level=info msg="Starting container: 6679ca25aaa96fd7b53bbdd7bf1235bac18b0094e1f8739d0e68f6051cdb3a25" id=f7f13442-fa75-4740-b551-7995cb43448f name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 09:18:01 newest-cni-020433 crio[784]: time="2025-11-29T09:18:01.444161647Z" level=info msg="Started container" PID=1506 containerID=6679ca25aaa96fd7b53bbdd7bf1235bac18b0094e1f8739d0e68f6051cdb3a25 description=kube-system/kindnet-gxgwn/kindnet-cni id=f7f13442-fa75-4740-b551-7995cb43448f name=/runtime.v1.RuntimeService/StartContainer sandboxID=becf6e87464bbcf894aa7ab32fed6d7d64878439c08d1d0c2fea6d64be1bfe03
	Nov 29 09:18:01 newest-cni-020433 crio[784]: time="2025-11-29T09:18:01.446306284Z" level=info msg="Created container f93d0f43a50164132c0a4572580d14f2c3ffbeb35e1426f782d573e0a1d124e9: kube-system/kube-proxy-nqwzp/kube-proxy" id=ec734b6d-7139-46b3-b3d9-f89b9a5eb467 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:18:01 newest-cni-020433 crio[784]: time="2025-11-29T09:18:01.44772804Z" level=info msg="Starting container: f93d0f43a50164132c0a4572580d14f2c3ffbeb35e1426f782d573e0a1d124e9" id=8c54b205-b0a8-4947-b603-818a382bbd08 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 09:18:01 newest-cni-020433 crio[784]: time="2025-11-29T09:18:01.451551101Z" level=info msg="Started container" PID=1505 containerID=f93d0f43a50164132c0a4572580d14f2c3ffbeb35e1426f782d573e0a1d124e9 description=kube-system/kube-proxy-nqwzp/kube-proxy id=8c54b205-b0a8-4947-b603-818a382bbd08 name=/runtime.v1.RuntimeService/StartContainer sandboxID=60031598ee719b3654acc07d029c0d0478158a03507276464768784f5d628d8e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	6679ca25aaa96       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   2 seconds ago       Running             kindnet-cni               0                   becf6e87464bb       kindnet-gxgwn                               kube-system
	f93d0f43a5016       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   2 seconds ago       Running             kube-proxy                0                   60031598ee719       kube-proxy-nqwzp                            kube-system
	3ffdc8654175d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   12 seconds ago      Running             etcd                      0                   0574a91251425       etcd-newest-cni-020433                      kube-system
	9bfa12c4417e7       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   12 seconds ago      Running             kube-apiserver            0                   9c52d99b000c6       kube-apiserver-newest-cni-020433            kube-system
	95d1d8d98ee37       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   12 seconds ago      Running             kube-scheduler            0                   4fd6cd8609284       kube-scheduler-newest-cni-020433            kube-system
	078d5570e928c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   12 seconds ago      Running             kube-controller-manager   0                   bbc65fbaf8157       kube-controller-manager-newest-cni-020433   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-020433
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-020433
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=newest-cni-020433
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T09_17_56_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 09:17:53 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-020433
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 09:17:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 09:17:55 +0000   Sat, 29 Nov 2025 09:17:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 09:17:55 +0000   Sat, 29 Nov 2025 09:17:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 09:17:55 +0000   Sat, 29 Nov 2025 09:17:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 29 Nov 2025 09:17:55 +0000   Sat, 29 Nov 2025 09:17:51 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-020433
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                9e13478b-5cce-4854-b5e2-d069a5e427ce
	  Boot ID:                    5b66e152-4b71-4129-a911-cefbf4861c86
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-020433                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-gxgwn                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-020433             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-020433    200m (2%)     0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-proxy-nqwzp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-020433             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2s                 kube-proxy       
	  Normal  Starting                 13s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13s (x8 over 13s)  kubelet          Node newest-cni-020433 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13s (x8 over 13s)  kubelet          Node newest-cni-020433 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13s (x8 over 13s)  kubelet          Node newest-cni-020433 status is now: NodeHasSufficientPID
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s                 kubelet          Node newest-cni-020433 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s                 kubelet          Node newest-cni-020433 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s                 kubelet          Node newest-cni-020433 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s                 node-controller  Node newest-cni-020433 event: Registered Node newest-cni-020433 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 e1 87 e4 84 ae 08 06
	[  +0.002640] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8a c7 6c 3e 12 d1 08 06
	[ +14.778105] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 56 8a 29 c7 2a 6f 08 06
	[ +13.520989] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 86 32 aa eb 52 77 08 06
	[  +0.000371] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 56 8a 29 c7 2a 6f 08 06
	[ +14.762906] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 df ae 93 da f6 08 06
	[  +0.000384] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a c7 6c 3e 12 d1 08 06
	[Nov29 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 2e a3 ac f9 5c c0 08 06
	[ +13.297626] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 33 ff 54 aa a1 08 06
	[  +7.390906] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0a 68 f3 3d 3c e9 08 06
	[  +0.000436] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 2e a3 ac f9 5c c0 08 06
	[  +7.145922] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ea c3 2f 2a a3 01 08 06
	[  +0.000415] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e2 33 ff 54 aa a1 08 06
	
	
	==> etcd [3ffdc8654175d129987a455ab1e818970efa75589b352c1767d2fbe03af38975] <==
	{"level":"warn","ts":"2025-11-29T09:17:52.596166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:52.602832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:52.611372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:52.618200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:52.625316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:52.631765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:52.638238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:52.644629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:52.651719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:52.659070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:52.672110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:52.679733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:52.686368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:52.693274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:52.699894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:52.706592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:52.713719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:52.727088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:52.733693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:52.741278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:52.747819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:52.766028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:52.772674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:52.779325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:52.831245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59460","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:18:03 up  1:00,  0 user,  load average: 3.06, 3.68, 2.49
	Linux newest-cni-020433 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6679ca25aaa96fd7b53bbdd7bf1235bac18b0094e1f8739d0e68f6051cdb3a25] <==
	I1129 09:18:01.703453       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 09:18:01.704017       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1129 09:18:01.704177       1 main.go:148] setting mtu 1500 for CNI 
	I1129 09:18:01.704198       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 09:18:01.704224       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T09:18:01Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 09:18:02.000198       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 09:18:02.001418       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 09:18:02.001445       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 09:18:02.001882       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1129 09:18:02.397471       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 09:18:02.397542       1 metrics.go:72] Registering metrics
	I1129 09:18:02.397630       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [9bfa12c4417e7b4a5e5a64b9f7eace04940df2f7d23bee7310580be2c9cf4500] <==
	I1129 09:17:53.311741       1 controller.go:667] quota admission added evaluator for: namespaces
	I1129 09:17:53.314209       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1129 09:17:53.314254       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:17:53.316090       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1129 09:17:53.316300       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1129 09:17:53.321870       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:17:53.322107       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1129 09:17:53.336560       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 09:17:54.215703       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1129 09:17:54.219892       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1129 09:17:54.219916       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 09:17:54.767579       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 09:17:54.809162       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 09:17:54.920551       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1129 09:17:54.927105       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1129 09:17:54.928389       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 09:17:54.933810       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 09:17:55.246763       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 09:17:55.893015       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 09:17:55.902487       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1129 09:17:55.913540       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1129 09:18:00.951808       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:18:00.957312       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:18:01.050545       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1129 09:18:01.350284       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [078d5570e928c4ccaf6e7448782e0726391d50214d059af89a1ba0723ffb4479] <==
	I1129 09:18:00.209026       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1129 09:18:00.210099       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-020433" podCIDRs=["10.42.0.0/24"]
	I1129 09:18:00.234296       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1129 09:18:00.241518       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1129 09:18:00.244230       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1129 09:18:00.245417       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1129 09:18:00.246580       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1129 09:18:00.246606       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1129 09:18:00.246631       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1129 09:18:00.246636       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1129 09:18:00.246656       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1129 09:18:00.246685       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1129 09:18:00.246611       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1129 09:18:00.246708       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1129 09:18:00.246716       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1129 09:18:00.246684       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1129 09:18:00.247018       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1129 09:18:00.247996       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1129 09:18:00.248020       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1129 09:18:00.248047       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1129 09:18:00.248049       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1129 09:18:00.248072       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1129 09:18:00.250906       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:18:00.275140       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1129 09:18:00.275287       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [f93d0f43a50164132c0a4572580d14f2c3ffbeb35e1426f782d573e0a1d124e9] <==
	I1129 09:18:01.510481       1 server_linux.go:53] "Using iptables proxy"
	I1129 09:18:01.596396       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 09:18:01.696964       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 09:18:01.697012       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1129 09:18:01.697114       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 09:18:01.731025       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 09:18:01.731079       1 server_linux.go:132] "Using iptables Proxier"
	I1129 09:18:01.741121       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 09:18:01.741762       1 server.go:527] "Version info" version="v1.34.1"
	I1129 09:18:01.741786       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:18:01.743504       1 config.go:200] "Starting service config controller"
	I1129 09:18:01.743568       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 09:18:01.743638       1 config.go:309] "Starting node config controller"
	I1129 09:18:01.743660       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 09:18:01.743683       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 09:18:01.743873       1 config.go:106] "Starting endpoint slice config controller"
	I1129 09:18:01.743962       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 09:18:01.744149       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 09:18:01.744214       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 09:18:01.845746       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 09:18:01.845827       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 09:18:01.846029       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [95d1d8d98ee372a6242a942b471520f5800fc3d2b99058ca90f65abb6759720b] <==
	E1129 09:17:53.272425       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1129 09:17:53.272454       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1129 09:17:53.272479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1129 09:17:53.272503       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1129 09:17:53.272933       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 09:17:53.273262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1129 09:17:53.273415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 09:17:53.273450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 09:17:53.273486       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1129 09:17:54.095361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 09:17:54.126733       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1129 09:17:54.153119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1129 09:17:54.216962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1129 09:17:54.266580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1129 09:17:54.336858       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 09:17:54.338822       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 09:17:54.345141       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1129 09:17:54.368467       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1129 09:17:54.369334       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1129 09:17:54.454194       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 09:17:54.463400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1129 09:17:54.465361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1129 09:17:54.481872       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1129 09:17:54.503100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1129 09:17:57.266145       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 09:17:56 newest-cni-020433 kubelet[1311]: I1129 09:17:56.710396    1311 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 29 09:17:56 newest-cni-020433 kubelet[1311]: I1129 09:17:56.740572    1311 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-020433"
	Nov 29 09:17:56 newest-cni-020433 kubelet[1311]: I1129 09:17:56.740689    1311 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-020433"
	Nov 29 09:17:56 newest-cni-020433 kubelet[1311]: I1129 09:17:56.740768    1311 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-020433"
	Nov 29 09:17:56 newest-cni-020433 kubelet[1311]: I1129 09:17:56.741158    1311 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-020433"
	Nov 29 09:17:56 newest-cni-020433 kubelet[1311]: E1129 09:17:56.760004    1311 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-020433\" already exists" pod="kube-system/kube-scheduler-newest-cni-020433"
	Nov 29 09:17:56 newest-cni-020433 kubelet[1311]: E1129 09:17:56.760417    1311 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-020433\" already exists" pod="kube-system/etcd-newest-cni-020433"
	Nov 29 09:17:56 newest-cni-020433 kubelet[1311]: E1129 09:17:56.760585    1311 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-020433\" already exists" pod="kube-system/kube-apiserver-newest-cni-020433"
	Nov 29 09:17:56 newest-cni-020433 kubelet[1311]: E1129 09:17:56.760818    1311 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-020433\" already exists" pod="kube-system/kube-controller-manager-newest-cni-020433"
	Nov 29 09:17:56 newest-cni-020433 kubelet[1311]: I1129 09:17:56.779108    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-020433" podStartSLOduration=1.779085362 podStartE2EDuration="1.779085362s" podCreationTimestamp="2025-11-29 09:17:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:17:56.778982564 +0000 UTC m=+1.142656521" watchObservedRunningTime="2025-11-29 09:17:56.779085362 +0000 UTC m=+1.142759319"
	Nov 29 09:17:56 newest-cni-020433 kubelet[1311]: I1129 09:17:56.800712    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-020433" podStartSLOduration=1.800689718 podStartE2EDuration="1.800689718s" podCreationTimestamp="2025-11-29 09:17:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:17:56.790257573 +0000 UTC m=+1.153931533" watchObservedRunningTime="2025-11-29 09:17:56.800689718 +0000 UTC m=+1.164363676"
	Nov 29 09:17:56 newest-cni-020433 kubelet[1311]: I1129 09:17:56.801100    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-020433" podStartSLOduration=1.800944795 podStartE2EDuration="1.800944795s" podCreationTimestamp="2025-11-29 09:17:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:17:56.800949785 +0000 UTC m=+1.164623726" watchObservedRunningTime="2025-11-29 09:17:56.800944795 +0000 UTC m=+1.164618745"
	Nov 29 09:17:56 newest-cni-020433 kubelet[1311]: I1129 09:17:56.814866    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-020433" podStartSLOduration=2.8148282079999998 podStartE2EDuration="2.814828208s" podCreationTimestamp="2025-11-29 09:17:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:17:56.814752946 +0000 UTC m=+1.178426906" watchObservedRunningTime="2025-11-29 09:17:56.814828208 +0000 UTC m=+1.178502167"
	Nov 29 09:18:00 newest-cni-020433 kubelet[1311]: I1129 09:18:00.248446    1311 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 29 09:18:00 newest-cni-020433 kubelet[1311]: I1129 09:18:00.249241    1311 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 29 09:18:01 newest-cni-020433 kubelet[1311]: I1129 09:18:01.147327    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e13d750-7bcf-4e2a-9663-512ecc23781a-xtables-lock\") pod \"kindnet-gxgwn\" (UID: \"7e13d750-7bcf-4e2a-9663-512ecc23781a\") " pod="kube-system/kindnet-gxgwn"
	Nov 29 09:18:01 newest-cni-020433 kubelet[1311]: I1129 09:18:01.147393    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/118d6bdc-5c33-4ab5-bee8-6f8a3447c461-xtables-lock\") pod \"kube-proxy-nqwzp\" (UID: \"118d6bdc-5c33-4ab5-bee8-6f8a3447c461\") " pod="kube-system/kube-proxy-nqwzp"
	Nov 29 09:18:01 newest-cni-020433 kubelet[1311]: I1129 09:18:01.147419    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/118d6bdc-5c33-4ab5-bee8-6f8a3447c461-lib-modules\") pod \"kube-proxy-nqwzp\" (UID: \"118d6bdc-5c33-4ab5-bee8-6f8a3447c461\") " pod="kube-system/kube-proxy-nqwzp"
	Nov 29 09:18:01 newest-cni-020433 kubelet[1311]: I1129 09:18:01.147443    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7e13d750-7bcf-4e2a-9663-512ecc23781a-cni-cfg\") pod \"kindnet-gxgwn\" (UID: \"7e13d750-7bcf-4e2a-9663-512ecc23781a\") " pod="kube-system/kindnet-gxgwn"
	Nov 29 09:18:01 newest-cni-020433 kubelet[1311]: I1129 09:18:01.147460    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e13d750-7bcf-4e2a-9663-512ecc23781a-lib-modules\") pod \"kindnet-gxgwn\" (UID: \"7e13d750-7bcf-4e2a-9663-512ecc23781a\") " pod="kube-system/kindnet-gxgwn"
	Nov 29 09:18:01 newest-cni-020433 kubelet[1311]: I1129 09:18:01.147481    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8t84\" (UniqueName: \"kubernetes.io/projected/7e13d750-7bcf-4e2a-9663-512ecc23781a-kube-api-access-h8t84\") pod \"kindnet-gxgwn\" (UID: \"7e13d750-7bcf-4e2a-9663-512ecc23781a\") " pod="kube-system/kindnet-gxgwn"
	Nov 29 09:18:01 newest-cni-020433 kubelet[1311]: I1129 09:18:01.147569    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/118d6bdc-5c33-4ab5-bee8-6f8a3447c461-kube-proxy\") pod \"kube-proxy-nqwzp\" (UID: \"118d6bdc-5c33-4ab5-bee8-6f8a3447c461\") " pod="kube-system/kube-proxy-nqwzp"
	Nov 29 09:18:01 newest-cni-020433 kubelet[1311]: I1129 09:18:01.147625    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpph7\" (UniqueName: \"kubernetes.io/projected/118d6bdc-5c33-4ab5-bee8-6f8a3447c461-kube-api-access-rpph7\") pod \"kube-proxy-nqwzp\" (UID: \"118d6bdc-5c33-4ab5-bee8-6f8a3447c461\") " pod="kube-system/kube-proxy-nqwzp"
	Nov 29 09:18:01 newest-cni-020433 kubelet[1311]: I1129 09:18:01.786689    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nqwzp" podStartSLOduration=0.786664419 podStartE2EDuration="786.664419ms" podCreationTimestamp="2025-11-29 09:18:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:18:01.786488229 +0000 UTC m=+6.150162185" watchObservedRunningTime="2025-11-29 09:18:01.786664419 +0000 UTC m=+6.150338388"
	Nov 29 09:18:01 newest-cni-020433 kubelet[1311]: I1129 09:18:01.787379    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-gxgwn" podStartSLOduration=0.787364845 podStartE2EDuration="787.364845ms" podCreationTimestamp="2025-11-29 09:18:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:18:01.771036148 +0000 UTC m=+6.134710115" watchObservedRunningTime="2025-11-29 09:18:01.787364845 +0000 UTC m=+6.151038803"
	

-- /stdout --
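Note on the node state above: the Ready=False condition ("no CNI configuration file in /etc/cni/net.d/") is typical this early in bring-up — kindnet-gxgwn had been running for only a couple of seconds, and its cni-cfg host-path mount is what populates that directory. A minimal sketch for checking whether the condition has since cleared, assuming the newest-cni-020433 profile is still up:

	# Node conditions, including Ready, at a glance
	kubectl --context newest-cni-020433 get nodes -o wide
	# The directory the kubelet complained about; kindnet's cni-cfg mount writes here
	minikube -p newest-cni-020433 ssh -- ls /etc/cni/net.d/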
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-020433 -n newest-cni-020433
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-020433 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-h8nqv storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-020433 describe pod coredns-66bc5c9577-h8nqv storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-020433 describe pod coredns-66bc5c9577-h8nqv storage-provisioner: exit status 1 (70.536597ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-h8nqv" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-020433 describe pod coredns-66bc5c9577-h8nqv storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.30s)
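The NotFound errors in the post-mortem above are most likely a race rather than a second failure: coredns-66bc5c9577-h8nqv and storage-provisioner were gone (or replaced under new names) by the time describe ran. The harness's non-running-pod query (helpers_test.go:269) can be replayed by hand; a minimal sketch, assuming the same context:

	# List pods in any namespace whose phase is not Running
	kubectl --context newest-cni-020433 get po -A \
	  --field-selector=status.phase!=Running \
	  -o jsonpath='{.items[*].metadata.name}'
	# Then describe any survivors; NotFound here just means the pod has been replaced
	kubectl --context newest-cni-020433 describe pod <pod-name>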

TestStartStop/group/embed-certs/serial/Pause (5.83s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-160987 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-160987 --alsologtostderr -v=1: exit status 80 (1.836776811s)

-- stdout --
	* Pausing node embed-certs-160987 ... 
	
	

-- /stdout --
** stderr ** 
	I1129 09:18:03.114311  350917 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:18:03.114448  350917 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:18:03.114463  350917 out.go:374] Setting ErrFile to fd 2...
	I1129 09:18:03.114469  350917 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:18:03.114768  350917 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 09:18:03.115088  350917 out.go:368] Setting JSON to false
	I1129 09:18:03.115105  350917 mustload.go:66] Loading cluster: embed-certs-160987
	I1129 09:18:03.115608  350917 config.go:182] Loaded profile config "embed-certs-160987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:18:03.116139  350917 cli_runner.go:164] Run: docker container inspect embed-certs-160987 --format={{.State.Status}}
	I1129 09:18:03.142235  350917 host.go:66] Checking if "embed-certs-160987" exists ...
	I1129 09:18:03.142568  350917 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:18:03.215513  350917 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-29 09:18:03.203290755 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:18:03.216326  350917 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-160987 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1129 09:18:03.218033  350917 out.go:179] * Pausing node embed-certs-160987 ... 
	I1129 09:18:03.219616  350917 host.go:66] Checking if "embed-certs-160987" exists ...
	I1129 09:18:03.220005  350917 ssh_runner.go:195] Run: systemctl --version
	I1129 09:18:03.220075  350917 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-160987
	I1129 09:18:03.244751  350917 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/embed-certs-160987/id_rsa Username:docker}
	I1129 09:18:03.355577  350917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:18:03.388105  350917 pause.go:52] kubelet running: true
	I1129 09:18:03.388282  350917 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1129 09:18:03.588051  350917 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1129 09:18:03.588180  350917 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1129 09:18:03.678000  350917 cri.go:89] found id: "029d78d32a7ea14234ad87926a9be889f2d496efc65239c68f4aed436287d272"
	I1129 09:18:03.678060  350917 cri.go:89] found id: "b1168c95c222d7ead90133fcf186480b098d18439b28dae77675b1df0317dc77"
	I1129 09:18:03.678067  350917 cri.go:89] found id: "00c1cfc1e5404b627c278cb3aa524243f84e0940022fa9856a40f2180118e3da"
	I1129 09:18:03.678072  350917 cri.go:89] found id: "86f9aa5168cf43f40605f3e7fc7ef07afa72f313e7f427fb772bbccde2c8feb9"
	I1129 09:18:03.678077  350917 cri.go:89] found id: "668d94450587744ba0fea9e8fca8a95da8eb1024372b15cdc4023f63e16b8f81"
	I1129 09:18:03.678087  350917 cri.go:89] found id: "b910bdb65bdedc5ad424106b6aea90fdb221e9c9e03ce5e62c16682d9c219dbf"
	I1129 09:18:03.678091  350917 cri.go:89] found id: "d40c5061382593cad885d4b3c86be7a3641ec567ffe3cb652cfd84dd0c2396bf"
	I1129 09:18:03.678096  350917 cri.go:89] found id: "6ee1a1cef6abf99fe2be4154d33fa7e55335140b3c9fc7c979eabca17e682341"
	I1129 09:18:03.678142  350917 cri.go:89] found id: "062c767d0f027b4b3689a35cad7c6003a28dac146ef6a6e9732382f36ec71ffa"
	I1129 09:18:03.678163  350917 cri.go:89] found id: "f2694b730cdae9713e45775e584e5c29751a7dc48494b00ad67c3002310bbbcb"
	I1129 09:18:03.678172  350917 cri.go:89] found id: "9466ff8c42bf12177809c91484f8627a7ea39bef17beb3f8f5f5fbc14b260a39"
	I1129 09:18:03.678176  350917 cri.go:89] found id: ""
	I1129 09:18:03.678249  350917 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 09:18:03.692409  350917 retry.go:31] will retry after 280.180746ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:18:03Z" level=error msg="open /run/runc: no such file or directory"
	I1129 09:18:03.972939  350917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:18:03.988298  350917 pause.go:52] kubelet running: false
	I1129 09:18:03.988376  350917 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1129 09:18:04.165703  350917 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1129 09:18:04.165809  350917 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1129 09:18:04.250690  350917 cri.go:89] found id: "029d78d32a7ea14234ad87926a9be889f2d496efc65239c68f4aed436287d272"
	I1129 09:18:04.250716  350917 cri.go:89] found id: "b1168c95c222d7ead90133fcf186480b098d18439b28dae77675b1df0317dc77"
	I1129 09:18:04.250722  350917 cri.go:89] found id: "00c1cfc1e5404b627c278cb3aa524243f84e0940022fa9856a40f2180118e3da"
	I1129 09:18:04.250727  350917 cri.go:89] found id: "86f9aa5168cf43f40605f3e7fc7ef07afa72f313e7f427fb772bbccde2c8feb9"
	I1129 09:18:04.250731  350917 cri.go:89] found id: "668d94450587744ba0fea9e8fca8a95da8eb1024372b15cdc4023f63e16b8f81"
	I1129 09:18:04.250736  350917 cri.go:89] found id: "b910bdb65bdedc5ad424106b6aea90fdb221e9c9e03ce5e62c16682d9c219dbf"
	I1129 09:18:04.250740  350917 cri.go:89] found id: "d40c5061382593cad885d4b3c86be7a3641ec567ffe3cb652cfd84dd0c2396bf"
	I1129 09:18:04.250744  350917 cri.go:89] found id: "6ee1a1cef6abf99fe2be4154d33fa7e55335140b3c9fc7c979eabca17e682341"
	I1129 09:18:04.250748  350917 cri.go:89] found id: "062c767d0f027b4b3689a35cad7c6003a28dac146ef6a6e9732382f36ec71ffa"
	I1129 09:18:04.250765  350917 cri.go:89] found id: "f2694b730cdae9713e45775e584e5c29751a7dc48494b00ad67c3002310bbbcb"
	I1129 09:18:04.250774  350917 cri.go:89] found id: "9466ff8c42bf12177809c91484f8627a7ea39bef17beb3f8f5f5fbc14b260a39"
	I1129 09:18:04.250778  350917 cri.go:89] found id: ""
	I1129 09:18:04.250868  350917 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 09:18:04.265234  350917 retry.go:31] will retry after 344.235398ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:18:04Z" level=error msg="open /run/runc: no such file or directory"
	I1129 09:18:04.609767  350917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:18:04.623907  350917 pause.go:52] kubelet running: false
	I1129 09:18:04.623968  350917 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1129 09:18:04.784375  350917 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1129 09:18:04.784456  350917 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1129 09:18:04.857195  350917 cri.go:89] found id: "029d78d32a7ea14234ad87926a9be889f2d496efc65239c68f4aed436287d272"
	I1129 09:18:04.857221  350917 cri.go:89] found id: "b1168c95c222d7ead90133fcf186480b098d18439b28dae77675b1df0317dc77"
	I1129 09:18:04.857226  350917 cri.go:89] found id: "00c1cfc1e5404b627c278cb3aa524243f84e0940022fa9856a40f2180118e3da"
	I1129 09:18:04.857230  350917 cri.go:89] found id: "86f9aa5168cf43f40605f3e7fc7ef07afa72f313e7f427fb772bbccde2c8feb9"
	I1129 09:18:04.857234  350917 cri.go:89] found id: "668d94450587744ba0fea9e8fca8a95da8eb1024372b15cdc4023f63e16b8f81"
	I1129 09:18:04.857239  350917 cri.go:89] found id: "b910bdb65bdedc5ad424106b6aea90fdb221e9c9e03ce5e62c16682d9c219dbf"
	I1129 09:18:04.857243  350917 cri.go:89] found id: "d40c5061382593cad885d4b3c86be7a3641ec567ffe3cb652cfd84dd0c2396bf"
	I1129 09:18:04.857248  350917 cri.go:89] found id: "6ee1a1cef6abf99fe2be4154d33fa7e55335140b3c9fc7c979eabca17e682341"
	I1129 09:18:04.857252  350917 cri.go:89] found id: "062c767d0f027b4b3689a35cad7c6003a28dac146ef6a6e9732382f36ec71ffa"
	I1129 09:18:04.857261  350917 cri.go:89] found id: "f2694b730cdae9713e45775e584e5c29751a7dc48494b00ad67c3002310bbbcb"
	I1129 09:18:04.857267  350917 cri.go:89] found id: "9466ff8c42bf12177809c91484f8627a7ea39bef17beb3f8f5f5fbc14b260a39"
	I1129 09:18:04.857272  350917 cri.go:89] found id: ""
	I1129 09:18:04.857320  350917 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 09:18:04.872260  350917 out.go:203] 
	W1129 09:18:04.873354  350917 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:18:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:18:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 09:18:04.873386  350917 out.go:285] * 
	W1129 09:18:04.877566  350917 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 09:18:04.878837  350917 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-160987 --alsologtostderr -v=1 failed: exit status 80
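The stderr above reduces to one step: after `sudo systemctl disable --now kubelet`, minikube enumerates running containers with `sudo runc list -f json`, which fails with `open /run/runc: no such file or directory`; the log shows one retry after ~344ms (retry.go:31) before the command exits with GUEST_PAUSE. A minimal sketch of that list-with-backoff step, using only the command and delay visible in the log — the helper name and retry bounds here are illustrative, not minikube's actual implementation:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// listRuncContainers runs the same command the log shows failing.
	// Illustrative only; minikube's real code path differs.
	func listRuncContainers() ([]byte, error) {
		return exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	}

	func main() {
		var out []byte
		var err error
		// Retry with a short growing delay, mirroring the ~344ms retry in the log.
		for attempt, delay := 1, 344*time.Millisecond; attempt <= 3; attempt++ {
			if out, err = listRuncContainers(); err == nil {
				break
			}
			fmt.Printf("attempt %d: %v; retrying in %s\n", attempt, err, delay)
			time.Sleep(delay)
			delay *= 2
		}
		if err != nil {
			fmt.Printf("list running: %v\noutput: %s\n", err, out)
			return
		}
		fmt.Printf("running containers: %s\n", out)
	}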
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-160987
helpers_test.go:243: (dbg) docker inspect embed-certs-160987:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7b45c51a261424d0f936d1259fe11001ab6554f9438c68d643ce706af6921dd8",
	        "Created": "2025-11-29T09:15:55.293730055Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 336822,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T09:17:02.367334099Z",
	            "FinishedAt": "2025-11-29T09:17:00.918949985Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/7b45c51a261424d0f936d1259fe11001ab6554f9438c68d643ce706af6921dd8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7b45c51a261424d0f936d1259fe11001ab6554f9438c68d643ce706af6921dd8/hostname",
	        "HostsPath": "/var/lib/docker/containers/7b45c51a261424d0f936d1259fe11001ab6554f9438c68d643ce706af6921dd8/hosts",
	        "LogPath": "/var/lib/docker/containers/7b45c51a261424d0f936d1259fe11001ab6554f9438c68d643ce706af6921dd8/7b45c51a261424d0f936d1259fe11001ab6554f9438c68d643ce706af6921dd8-json.log",
	        "Name": "/embed-certs-160987",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-160987:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-160987",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7b45c51a261424d0f936d1259fe11001ab6554f9438c68d643ce706af6921dd8",
	                "LowerDir": "/var/lib/docker/overlay2/338bc42e1b80ba62e9fe902fb732aa26dedd5005037b5297154c97608cba7a83-init/diff:/var/lib/docker/overlay2/5b012372cfb54f6c71f4d7f0bca0124866eeda530eaa04bd84a67c7b4c8b35a8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/338bc42e1b80ba62e9fe902fb732aa26dedd5005037b5297154c97608cba7a83/merged",
	                "UpperDir": "/var/lib/docker/overlay2/338bc42e1b80ba62e9fe902fb732aa26dedd5005037b5297154c97608cba7a83/diff",
	                "WorkDir": "/var/lib/docker/overlay2/338bc42e1b80ba62e9fe902fb732aa26dedd5005037b5297154c97608cba7a83/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-160987",
	                "Source": "/var/lib/docker/volumes/embed-certs-160987/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-160987",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-160987",
	                "name.minikube.sigs.k8s.io": "embed-certs-160987",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "e8324bcf329a0f25cc97f5027d4d2be0438676e9e1ff92b80a2f2fff2536a848",
	            "SandboxKey": "/var/run/docker/netns/e8324bcf329a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-160987": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8f9ed915c5ff4babba294a5f95692de1cf5aa6f0db70276e7d083db5e7930b90",
	                    "EndpointID": "a2f69703b583f6ac1d1305e75301a3877d4819d6e5a7565a6dac2e6af7bcff44",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "2a:cd:8b:66:8a:0b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-160987",
	                        "7b45c51a2614"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
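Note the `"Tmpfs": {"/run": "", "/tmp": ""}` entry under HostConfig in the inspect output above: `/run` inside the node container is a tmpfs, so the runc state directory `/run/runc` exists only while the runtime has live state, which is consistent with the `open /run/runc: no such file or directory` error in the pause output. A small self-contained check of that field — the container name comes from this report, and the struct models only the one field read here:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspectInfo models only HostConfig.Tmpfs from `docker inspect` output.
	type inspectInfo struct {
		HostConfig struct {
			Tmpfs map[string]string `json:"Tmpfs"`
		} `json:"HostConfig"`
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "embed-certs-160987").Output()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		var infos []inspectInfo
		if err := json.Unmarshal(out, &infos); err != nil || len(infos) == 0 {
			fmt.Println("could not decode inspect output:", err)
			return
		}
		if _, ok := infos[0].HostConfig.Tmpfs["/run"]; ok {
			fmt.Println("/run is tmpfs-mounted; runc state under /run/runc is ephemeral")
		} else {
			fmt.Println("/run is not a tmpfs mount on this container")
		}
	}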
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-160987 -n embed-certs-160987
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-160987 -n embed-certs-160987: exit status 2 (354.950741ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-160987 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-160987 logs -n 25: (1.282009335s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-897274 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:17 UTC │
	│ addons  │ enable metrics-server -p embed-certs-160987 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-632243 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	│ stop    │ -p embed-certs-160987 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:17 UTC │
	│ stop    │ -p default-k8s-diff-port-632243 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-160987 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ start   │ -p embed-certs-160987 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-632243 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ start   │ -p default-k8s-diff-port-632243 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ image   │ old-k8s-version-680646 image list --format=json                                                                                                                                                                                               │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ pause   │ -p old-k8s-version-680646 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │                     │
	│ delete  │ -p old-k8s-version-680646                                                                                                                                                                                                                     │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ image   │ no-preload-897274 image list --format=json                                                                                                                                                                                                    │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ pause   │ -p no-preload-897274 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │                     │
	│ delete  │ -p old-k8s-version-680646                                                                                                                                                                                                                     │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ start   │ -p newest-cni-020433 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-020433            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:18 UTC │
	│ delete  │ -p no-preload-897274                                                                                                                                                                                                                          │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ delete  │ -p no-preload-897274                                                                                                                                                                                                                          │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ image   │ default-k8s-diff-port-632243 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ pause   │ -p default-k8s-diff-port-632243 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-020433 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-020433            │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │                     │
	│ image   │ embed-certs-160987 image list --format=json                                                                                                                                                                                                   │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │ 29 Nov 25 09:18 UTC │
	│ pause   │ -p embed-certs-160987 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-632243                                                                                                                                                                                                               │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │                     │
	│ stop    │ -p newest-cni-020433 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-020433            │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:17:32
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 09:17:32.750525  343912 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:17:32.750831  343912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:17:32.750854  343912 out.go:374] Setting ErrFile to fd 2...
	I1129 09:17:32.750859  343912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:17:32.751040  343912 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 09:17:32.751569  343912 out.go:368] Setting JSON to false
	I1129 09:17:32.753086  343912 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3605,"bootTime":1764404248,"procs":412,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 09:17:32.753155  343912 start.go:143] virtualization: kvm guest
	I1129 09:17:32.755163  343912 out.go:179] * [newest-cni-020433] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 09:17:32.756656  343912 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:17:32.756692  343912 notify.go:221] Checking for updates...
	I1129 09:17:32.759425  343912 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:17:32.760722  343912 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 09:17:32.765362  343912 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5652/.minikube
	I1129 09:17:32.766699  343912 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 09:17:32.768011  343912 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:17:32.769812  343912 config.go:182] Loaded profile config "default-k8s-diff-port-632243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:17:32.769952  343912 config.go:182] Loaded profile config "embed-certs-160987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:17:32.770081  343912 config.go:182] Loaded profile config "no-preload-897274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:17:32.770208  343912 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:17:32.794655  343912 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1129 09:17:32.794775  343912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:17:32.856269  343912 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:74 SystemTime:2025-11-29 09:17:32.845151576 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:17:32.856388  343912 docker.go:319] overlay module found
	I1129 09:17:32.858258  343912 out.go:179] * Using the docker driver based on user configuration
	I1129 09:17:32.859415  343912 start.go:309] selected driver: docker
	I1129 09:17:32.859434  343912 start.go:927] validating driver "docker" against <nil>
	I1129 09:17:32.859451  343912 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:17:32.860352  343912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:17:32.930751  343912 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:74 SystemTime:2025-11-29 09:17:32.91839311 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:17:32.930951  343912 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1129 09:17:32.930985  343912 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1129 09:17:32.931224  343912 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1129 09:17:32.933425  343912 out.go:179] * Using Docker driver with root privileges
	I1129 09:17:32.934824  343912 cni.go:84] Creating CNI manager for ""
	I1129 09:17:32.934925  343912 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:17:32.934944  343912 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1129 09:17:32.935044  343912 start.go:353] cluster config:
	{Name:newest-cni-020433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-020433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:17:32.936354  343912 out.go:179] * Starting "newest-cni-020433" primary control-plane node in "newest-cni-020433" cluster
	I1129 09:17:32.937514  343912 cache.go:134] Beginning downloading kic base image for docker with crio
	I1129 09:17:32.938803  343912 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 09:17:32.940016  343912 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:17:32.940051  343912 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1129 09:17:32.940062  343912 cache.go:65] Caching tarball of preloaded images
	I1129 09:17:32.940107  343912 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 09:17:32.940163  343912 preload.go:238] Found /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1129 09:17:32.940176  343912 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1129 09:17:32.940278  343912 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/config.json ...
	I1129 09:17:32.940301  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/config.json: {Name:mk7d4da653b0e884b27837053cd3d354c3ff76e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:32.963727  343912 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 09:17:32.963754  343912 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 09:17:32.963777  343912 cache.go:243] Successfully downloaded all kic artifacts
	I1129 09:17:32.963830  343912 start.go:360] acquireMachinesLock for newest-cni-020433: {Name:mk6347901682a01c9d317c6a402722ce1e16792e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:17:32.963998  343912 start.go:364] duration metric: took 95.455µs to acquireMachinesLock for "newest-cni-020433"
	I1129 09:17:32.964029  343912 start.go:93] Provisioning new machine with config: &{Name:newest-cni-020433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-020433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 09:17:32.964128  343912 start.go:125] createHost starting for "" (driver="docker")
	W1129 09:17:33.828970  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	W1129 09:17:35.829789  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	W1129 09:17:33.948277  336858 pod_ready.go:104] pod "coredns-66bc5c9577-z4m7c" is not "Ready", error: <nil>
	W1129 09:17:36.448064  336858 pod_ready.go:104] pod "coredns-66bc5c9577-z4m7c" is not "Ready", error: <nil>
	I1129 09:17:32.965989  343912 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1129 09:17:32.966316  343912 start.go:159] libmachine.API.Create for "newest-cni-020433" (driver="docker")
	I1129 09:17:32.966356  343912 client.go:173] LocalClient.Create starting
	I1129 09:17:32.966470  343912 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem
	I1129 09:17:32.966524  343912 main.go:143] libmachine: Decoding PEM data...
	I1129 09:17:32.966555  343912 main.go:143] libmachine: Parsing certificate...
	I1129 09:17:32.966626  343912 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem
	I1129 09:17:32.966654  343912 main.go:143] libmachine: Decoding PEM data...
	I1129 09:17:32.966670  343912 main.go:143] libmachine: Parsing certificate...
	I1129 09:17:32.967123  343912 cli_runner.go:164] Run: docker network inspect newest-cni-020433 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1129 09:17:32.987734  343912 cli_runner.go:211] docker network inspect newest-cni-020433 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1129 09:17:32.987872  343912 network_create.go:284] running [docker network inspect newest-cni-020433] to gather additional debugging logs...
	I1129 09:17:32.987905  343912 cli_runner.go:164] Run: docker network inspect newest-cni-020433
	W1129 09:17:33.007164  343912 cli_runner.go:211] docker network inspect newest-cni-020433 returned with exit code 1
	I1129 09:17:33.007194  343912 network_create.go:287] error running [docker network inspect newest-cni-020433]: docker network inspect newest-cni-020433: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-020433 not found
	I1129 09:17:33.007209  343912 network_create.go:289] output of [docker network inspect newest-cni-020433]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-020433 not found
	
	** /stderr **
	I1129 09:17:33.007343  343912 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:17:33.027663  343912 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-94fc752bc7a7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:ed:43:e0:ad:5a} reservation:<nil>}
	I1129 09:17:33.028420  343912 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4cfc302f5d5a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:42:73:ac:ba:18:bb} reservation:<nil>}
	I1129 09:17:33.029339  343912 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-05a73bbe16b8 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:a9:af:00:78:ac} reservation:<nil>}
	I1129 09:17:33.030217  343912 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eb6cd0}
	I1129 09:17:33.030243  343912 network_create.go:124] attempt to create docker network newest-cni-020433 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1129 09:17:33.030303  343912 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-020433 newest-cni-020433
	I1129 09:17:33.088543  343912 network_create.go:108] docker network newest-cni-020433 192.168.76.0/24 created
	I1129 09:17:33.088582  343912 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-020433" container
	I1129 09:17:33.088651  343912 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1129 09:17:33.110031  343912 cli_runner.go:164] Run: docker volume create newest-cni-020433 --label name.minikube.sigs.k8s.io=newest-cni-020433 --label created_by.minikube.sigs.k8s.io=true
	I1129 09:17:33.131986  343912 oci.go:103] Successfully created a docker volume newest-cni-020433
	I1129 09:17:33.132086  343912 cli_runner.go:164] Run: docker run --rm --name newest-cni-020433-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-020433 --entrypoint /usr/bin/test -v newest-cni-020433:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1129 09:17:33.542784  343912 oci.go:107] Successfully prepared a docker volume newest-cni-020433
	I1129 09:17:33.542890  343912 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:17:33.542904  343912 kic.go:194] Starting extracting preloaded images to volume ...
	I1129 09:17:33.542963  343912 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-020433:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	W1129 09:17:38.328506  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	W1129 09:17:40.827427  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	W1129 09:17:38.452229  336858 pod_ready.go:104] pod "coredns-66bc5c9577-z4m7c" is not "Ready", error: <nil>
	W1129 09:17:40.947913  336858 pod_ready.go:104] pod "coredns-66bc5c9577-z4m7c" is not "Ready", error: <nil>
	I1129 09:17:38.398985  343912 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-020433:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.855972089s)
	I1129 09:17:38.399017  343912 kic.go:203] duration metric: took 4.856111068s to extract preloaded images to volume ...
	W1129 09:17:38.399145  343912 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1129 09:17:38.399190  343912 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1129 09:17:38.399238  343912 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1129 09:17:38.467132  343912 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-020433 --name newest-cni-020433 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-020433 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-020433 --network newest-cni-020433 --ip 192.168.76.2 --volume newest-cni-020433:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1129 09:17:39.064807  343912 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Running}}
	I1129 09:17:39.085951  343912 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Status}}
	I1129 09:17:39.108652  343912 cli_runner.go:164] Run: docker exec newest-cni-020433 stat /var/lib/dpkg/alternatives/iptables
	I1129 09:17:39.159933  343912 oci.go:144] the created container "newest-cni-020433" has a running status.
	I1129 09:17:39.159970  343912 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa...
	I1129 09:17:39.228797  343912 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1129 09:17:39.262675  343912 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Status}}
	I1129 09:17:39.285576  343912 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1129 09:17:39.285600  343912 kic_runner.go:114] Args: [docker exec --privileged newest-cni-020433 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1129 09:17:39.349410  343912 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Status}}
	I1129 09:17:39.369689  343912 machine.go:94] provisionDockerMachine start ...
	I1129 09:17:39.369803  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:39.396522  343912 main.go:143] libmachine: Using SSH client type: native
	I1129 09:17:39.396932  343912 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1129 09:17:39.396965  343912 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:17:39.397982  343912 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1129 09:17:42.550448  343912 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-020433
	
	I1129 09:17:42.550474  343912 ubuntu.go:182] provisioning hostname "newest-cni-020433"
	I1129 09:17:42.550527  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:42.572133  343912 main.go:143] libmachine: Using SSH client type: native
	I1129 09:17:42.572440  343912 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1129 09:17:42.572461  343912 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-020433 && echo "newest-cni-020433" | sudo tee /etc/hostname
	I1129 09:17:42.733805  343912 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-020433
	
	I1129 09:17:42.733897  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:42.754783  343912 main.go:143] libmachine: Using SSH client type: native
	I1129 09:17:42.755144  343912 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1129 09:17:42.755173  343912 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-020433' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-020433/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-020433' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:17:42.901064  343912 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 09:17:42.901098  343912 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-5652/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-5652/.minikube}
	I1129 09:17:42.901148  343912 ubuntu.go:190] setting up certificates
	I1129 09:17:42.901161  343912 provision.go:84] configureAuth start
	I1129 09:17:42.901231  343912 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-020433
	I1129 09:17:42.921161  343912 provision.go:143] copyHostCerts
	I1129 09:17:42.921240  343912 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem, removing ...
	I1129 09:17:42.921253  343912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem
	I1129 09:17:42.921344  343912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem (1078 bytes)
	I1129 09:17:42.921497  343912 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem, removing ...
	I1129 09:17:42.921509  343912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem
	I1129 09:17:42.921568  343912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem (1123 bytes)
	I1129 09:17:42.921658  343912 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem, removing ...
	I1129 09:17:42.921666  343912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem
	I1129 09:17:42.921693  343912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem (1675 bytes)
	I1129 09:17:42.921761  343912 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem org=jenkins.newest-cni-020433 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-020433]
	I1129 09:17:43.032466  343912 provision.go:177] copyRemoteCerts
	I1129 09:17:43.032525  343912 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:17:43.032558  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:43.052823  343912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:17:43.158233  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1129 09:17:43.179138  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1129 09:17:43.198311  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1129 09:17:43.217652  343912 provision.go:87] duration metric: took 316.475572ms to configureAuth
	I1129 09:17:43.217682  343912 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:17:43.217917  343912 config.go:182] Loaded profile config "newest-cni-020433": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:17:43.218034  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:43.237980  343912 main.go:143] libmachine: Using SSH client type: native
	I1129 09:17:43.238211  343912 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1129 09:17:43.238225  343912 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 09:17:43.535016  343912 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 09:17:43.535041  343912 machine.go:97] duration metric: took 4.165320057s to provisionDockerMachine
	I1129 09:17:43.535052  343912 client.go:176] duration metric: took 10.568687757s to LocalClient.Create
	I1129 09:17:43.535073  343912 start.go:167] duration metric: took 10.568756916s to libmachine.API.Create "newest-cni-020433"
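	provisionDockerMachine finishes by writing CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarting CRI-O, which is how the service CIDR 10.96.0.0/12 ends up treated as an insecure registry range. A sketch of how one could confirm the drop-in took effect (not part of the test run):
	
	    minikube -p newest-cni-020433 ssh -- cat /etc/sysconfig/crio.minikube
	    minikube -p newest-cni-020433 ssh -- systemctl is-active crio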
	I1129 09:17:43.535083  343912 start.go:293] postStartSetup for "newest-cni-020433" (driver="docker")
	I1129 09:17:43.535095  343912 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:17:43.535160  343912 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:17:43.535203  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:43.554574  343912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:17:43.661234  343912 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:17:43.665051  343912 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:17:43.665086  343912 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:17:43.665114  343912 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5652/.minikube/addons for local assets ...
	I1129 09:17:43.665186  343912 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5652/.minikube/files for local assets ...
	I1129 09:17:43.665301  343912 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem -> 92162.pem in /etc/ssl/certs
	I1129 09:17:43.665409  343912 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:17:43.674165  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem --> /etc/ssl/certs/92162.pem (1708 bytes)
	I1129 09:17:43.696383  343912 start.go:296] duration metric: took 161.286243ms for postStartSetup
	I1129 09:17:43.696751  343912 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-020433
	I1129 09:17:43.716301  343912 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/config.json ...
	I1129 09:17:43.716589  343912 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:17:43.716640  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:43.735518  343912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:17:43.835307  343912 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 09:17:43.840211  343912 start.go:128] duration metric: took 10.876067654s to createHost
	I1129 09:17:43.840237  343912 start.go:83] releasing machines lock for "newest-cni-020433", held for 10.876224942s
	I1129 09:17:43.840309  343912 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-020433
	I1129 09:17:43.860942  343912 ssh_runner.go:195] Run: cat /version.json
	I1129 09:17:43.860995  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:43.861019  343912 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:17:43.861110  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:43.881396  343912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:17:43.881825  343912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:17:44.035348  343912 ssh_runner.go:195] Run: systemctl --version
	I1129 09:17:44.042398  343912 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 09:17:44.079667  343912 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:17:44.084668  343912 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:17:44.084747  343912 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:17:44.112611  343912 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
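	The find/mv pass renames any bridge or podman CNI config with a .mk_disabled suffix so that only the CNI minikube installs later (kindnet, below) stays active. Re-enabling one afterwards is just the reverse rename; an illustrative example using a path from this log:
	
	    sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled \
	            /etc/cni/net.d/87-podman-bridge.conflist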
	I1129 09:17:44.112638  343912 start.go:496] detecting cgroup driver to use...
	I1129 09:17:44.112675  343912 detect.go:190] detected "systemd" cgroup driver on host os
	I1129 09:17:44.112721  343912 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 09:17:44.130191  343912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 09:17:44.143333  343912 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:17:44.143407  343912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:17:44.160522  343912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:17:44.179005  343912 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:17:44.264507  343912 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:17:44.361596  343912 docker.go:234] disabling docker service ...
	I1129 09:17:44.361665  343912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:17:44.385098  343912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:17:44.399261  343912 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:17:44.490353  343912 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:17:44.577339  343912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
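	cri-docker and docker are each stopped, disabled, and masked so they cannot reclaim the container runtime sockets from CRI-O. If either is ever needed again on this node, the standard systemd reversal would be (a sketch, not something minikube does):
	
	    sudo systemctl unmask docker.service cri-docker.service
	    sudo systemctl enable --now docker.service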
	I1129 09:17:44.590606  343912 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:17:44.606040  343912 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 09:17:44.606113  343912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:44.617850  343912 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1129 09:17:44.617930  343912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:44.627795  343912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:44.637388  343912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:44.647881  343912 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:17:44.657593  343912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:44.667667  343912 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:44.683312  343912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:44.693180  343912 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:17:44.701299  343912 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:17:44.709519  343912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:17:44.789707  343912 ssh_runner.go:195] Run: sudo systemctl restart crio
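	Taken together, the commands since the crictl.yaml write pin crictl to the CRI-O socket and edit /etc/crio/crio.conf.d/02-crio.conf in place: pause_image registry.k8s.io/pause:3.10.1, cgroup_manager "systemd", conmon_cgroup "pod", and a default_sysctls entry opening unprivileged ports. After the restart, a quick check of the result might look like (sketch):
	
	    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	        /etc/crio/crio.conf.d/02-crio.conf
	    sudo crictl info    # talks to unix:///var/run/crio/crio.sock without extra flags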
	I1129 09:17:44.946719  343912 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 09:17:44.946786  343912 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 09:17:44.950988  343912 start.go:564] Will wait 60s for crictl version
	I1129 09:17:44.951061  343912 ssh_runner.go:195] Run: which crictl
	I1129 09:17:44.954897  343912 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:17:44.981273  343912 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1129 09:17:44.981355  343912 ssh_runner.go:195] Run: crio --version
	I1129 09:17:45.010241  343912 ssh_runner.go:195] Run: crio --version
	I1129 09:17:45.041932  343912 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1129 09:17:45.043598  343912 cli_runner.go:164] Run: docker network inspect newest-cni-020433 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:17:45.064493  343912 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1129 09:17:45.068916  343912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
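	The grep/rewrite pair above injects host.minikube.internal (the network gateway 192.168.76.1) into the node's /etc/hosts, giving workloads a stable name for the host machine. Verifiable from the node with (sketch):
	
	    minikube -p newest-cni-020433 ssh -- getent hosts host.minikube.internal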
	I1129 09:17:45.081636  343912 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1129 09:17:43.447332  336858 pod_ready.go:104] pod "coredns-66bc5c9577-z4m7c" is not "Ready", error: <nil>
	I1129 09:17:44.449613  336858 pod_ready.go:94] pod "coredns-66bc5c9577-z4m7c" is "Ready"
	I1129 09:17:44.449647  336858 pod_ready.go:86] duration metric: took 31.007906695s for pod "coredns-66bc5c9577-z4m7c" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:44.452244  336858 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:44.456751  336858 pod_ready.go:94] pod "etcd-default-k8s-diff-port-632243" is "Ready"
	I1129 09:17:44.456779  336858 pod_ready.go:86] duration metric: took 4.509231ms for pod "etcd-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:44.458972  336858 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:44.464014  336858 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-632243" is "Ready"
	I1129 09:17:44.464045  336858 pod_ready.go:86] duration metric: took 5.045626ms for pod "kube-apiserver-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:44.466444  336858 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:44.645988  336858 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-632243" is "Ready"
	I1129 09:17:44.646021  336858 pod_ready.go:86] duration metric: took 179.551463ms for pod "kube-controller-manager-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:44.845460  336858 pod_ready.go:83] waiting for pod "kube-proxy-p2nf7" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:45.245518  336858 pod_ready.go:94] pod "kube-proxy-p2nf7" is "Ready"
	I1129 09:17:45.245548  336858 pod_ready.go:86] duration metric: took 400.053767ms for pod "kube-proxy-p2nf7" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:45.445969  336858 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:45.847024  336858 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-632243" is "Ready"
	I1129 09:17:45.847054  336858 pod_ready.go:86] duration metric: took 401.056115ms for pod "kube-scheduler-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:45.847067  336858 pod_ready.go:40] duration metric: took 32.409409019s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:17:45.894722  336858 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1129 09:17:45.896514  336858 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-632243" cluster and "default" namespace by default
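	The lines tagged 336858 interleaved here are the parallel default-k8s-diff-port-632243 start finishing its pod-readiness wait; its Done! line means kubectl now points at that profile. Moving between the clusters started in parallel is ordinary kubectl context handling (sketch):
	
	    kubectl config get-contexts
	    kubectl config use-context default-k8s-diff-port-632243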
	W1129 09:17:42.828310  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	W1129 09:17:44.828378  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	I1129 09:17:45.082734  343912 kubeadm.go:884] updating cluster {Name:newest-cni-020433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-020433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:17:45.082902  343912 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:17:45.082966  343912 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:17:45.116711  343912 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:17:45.116737  343912 crio.go:433] Images already preloaded, skipping extraction
	I1129 09:17:45.116794  343912 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:17:45.143455  343912 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:17:45.143477  343912 cache_images.go:86] Images are preloaded, skipping loading
	I1129 09:17:45.143484  343912 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1129 09:17:45.143562  343912 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-020433 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-020433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 09:17:45.143624  343912 ssh_runner.go:195] Run: crio config
	I1129 09:17:45.191199  343912 cni.go:84] Creating CNI manager for ""
	I1129 09:17:45.191226  343912 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:17:45.191244  343912 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1129 09:17:45.191264  343912 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-020433 NodeName:newest-cni-020433 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:17:45.191372  343912 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-020433"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1129 09:17:45.191438  343912 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 09:17:45.199969  343912 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 09:17:45.200043  343912 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:17:45.208777  343912 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1129 09:17:45.222978  343912 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:17:45.238915  343912 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
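	The kubeadm documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, plus the KubeProxyConfiguration) are rendered in memory and copied to /var/tmp/minikube/kubeadm.yaml.new, to be promoted to kubeadm.yaml further down. Recent kubeadm can sanity-check such a file without touching the cluster; a sketch of a manual validation that minikube itself does not run:
	
	    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	    kubeadm config print init-defaults    # baseline to diff the overrides against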
	I1129 09:17:45.253505  343912 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1129 09:17:45.257546  343912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:17:45.269034  343912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:17:45.354518  343912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:17:45.382355  343912 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433 for IP: 192.168.76.2
	I1129 09:17:45.382379  343912 certs.go:195] generating shared ca certs ...
	I1129 09:17:45.382407  343912 certs.go:227] acquiring lock for ca certs: {Name:mk9945b3aa42b3d50b9bfd795af3a1c63f4e35bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:45.382577  343912 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key
	I1129 09:17:45.382636  343912 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key
	I1129 09:17:45.382650  343912 certs.go:257] generating profile certs ...
	I1129 09:17:45.382718  343912 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/client.key
	I1129 09:17:45.382739  343912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/client.crt with IP's: []
	I1129 09:17:45.531926  343912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/client.crt ...
	I1129 09:17:45.531957  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/client.crt: {Name:mkeb17feaf8ba6750a01bd0a1f0441d4154bc65f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:45.532140  343912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/client.key ...
	I1129 09:17:45.532151  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/client.key: {Name:mke1454a7dc3fbfdd29bdb836050690bcbb7394e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:45.532230  343912 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.key.22e84c70
	I1129 09:17:45.532247  343912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.crt.22e84c70 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1129 09:17:45.624876  343912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.crt.22e84c70 ...
	I1129 09:17:45.624908  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.crt.22e84c70: {Name:mk7ef25787741e084b6a866e43c94e1e8fef637a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:45.625077  343912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.key.22e84c70 ...
	I1129 09:17:45.625090  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.key.22e84c70: {Name:mk1ecd69640eeb4a11bb5f1e1ff7ab99459cb558 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:45.625222  343912 certs.go:382] copying /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.crt.22e84c70 -> /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.crt
	I1129 09:17:45.625303  343912 certs.go:386] copying /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.key.22e84c70 -> /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.key
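	The apiserver profile cert is generated with SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2] (service VIPs, loopback, and the node IP) under a hash-suffixed name, then copied to its final apiserver.crt/apiserver.key paths. The SANs can be confirmed with openssl (sketch, path from the log):
	
	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.crt \
	      | grep -A1 'Subject Alternative Name'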
	I1129 09:17:45.625381  343912 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.key
	I1129 09:17:45.625401  343912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.crt with IP's: []
	I1129 09:17:45.648826  343912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.crt ...
	I1129 09:17:45.648864  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.crt: {Name:mk66c6222d92d3d2bb033717f49fc6858d0a9367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:45.649040  343912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.key ...
	I1129 09:17:45.649052  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.key: {Name:mk559719a3cba034552025e578cadb28054704f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:45.649223  343912 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216.pem (1338 bytes)
	W1129 09:17:45.649259  343912 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216_empty.pem, impossibly tiny 0 bytes
	I1129 09:17:45.649269  343912 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem (1679 bytes)
	I1129 09:17:45.649291  343912 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem (1078 bytes)
	I1129 09:17:45.649314  343912 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:17:45.649337  343912 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem (1675 bytes)
	I1129 09:17:45.649376  343912 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem (1708 bytes)
	I1129 09:17:45.649920  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:17:45.669435  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 09:17:45.688777  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:17:45.707612  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1129 09:17:45.726954  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1129 09:17:45.745570  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1129 09:17:45.763773  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:17:45.781717  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1129 09:17:45.799936  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem --> /usr/share/ca-certificates/92162.pem (1708 bytes)
	I1129 09:17:45.820108  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:17:45.839214  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216.pem --> /usr/share/ca-certificates/9216.pem (1338 bytes)
	I1129 09:17:45.859643  343912 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:17:45.874007  343912 ssh_runner.go:195] Run: openssl version
	I1129 09:17:45.880775  343912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9216.pem && ln -fs /usr/share/ca-certificates/9216.pem /etc/ssl/certs/9216.pem"
	I1129 09:17:45.890438  343912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9216.pem
	I1129 09:17:45.894494  343912 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:34 /usr/share/ca-certificates/9216.pem
	I1129 09:17:45.894554  343912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9216.pem
	I1129 09:17:45.934499  343912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9216.pem /etc/ssl/certs/51391683.0"
	I1129 09:17:45.944013  343912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/92162.pem && ln -fs /usr/share/ca-certificates/92162.pem /etc/ssl/certs/92162.pem"
	I1129 09:17:45.953676  343912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92162.pem
	I1129 09:17:45.957999  343912 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:34 /usr/share/ca-certificates/92162.pem
	I1129 09:17:45.958047  343912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92162.pem
	I1129 09:17:45.998219  343912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/92162.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 09:17:46.008105  343912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:17:46.018512  343912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:17:46.022778  343912 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:17:46.022855  343912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:17:46.060278  343912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
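	Each openssl x509 -hash call computes the subject hash that OpenSSL's certificate directory lookup expects, and the paired ln -fs creates the matching <hash>.0 symlink under /etc/ssl/certs (b5213941.0 for minikubeCA above), mirroring what c_rehash/update-ca-certificates would do. Checking one by hand (sketch):
	
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
	    ls -l /etc/ssl/certs/b5213941.0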
	I1129 09:17:46.069685  343912 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:17:46.073627  343912 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1129 09:17:46.073677  343912 kubeadm.go:401] StartCluster: {Name:newest-cni-020433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-020433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:17:46.073751  343912 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 09:17:46.073796  343912 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:17:46.102729  343912 cri.go:89] found id: ""
	I1129 09:17:46.102806  343912 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:17:46.111499  343912 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1129 09:17:46.120045  343912 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1129 09:17:46.120110  343912 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1129 09:17:46.128326  343912 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1129 09:17:46.128366  343912 kubeadm.go:158] found existing configuration files:
	
	I1129 09:17:46.128413  343912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1129 09:17:46.136677  343912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1129 09:17:46.136741  343912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1129 09:17:46.144727  343912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1129 09:17:46.152908  343912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1129 09:17:46.152971  343912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1129 09:17:46.161300  343912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1129 09:17:46.170050  343912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1129 09:17:46.170117  343912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1129 09:17:46.179094  343912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1129 09:17:46.190258  343912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1129 09:17:46.190325  343912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
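	The four grep/rm pairs above are the stale-config cleanup: any kubeconfig under /etc/kubernetes that does not already point at https://control-plane.minikube.internal:8443 is deleted so kubeadm will regenerate it. Condensed, the per-file logic is (sketch):
	
	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f.conf" \
	        || sudo rm -f "/etc/kubernetes/$f.conf"
	    done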
	I1129 09:17:46.200333  343912 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1129 09:17:46.284775  343912 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1129 09:17:46.350549  343912 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1129 09:17:47.327775  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	W1129 09:17:49.327943  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	I1129 09:17:49.827724  336547 pod_ready.go:94] pod "coredns-66bc5c9577-ptx67" is "Ready"
	I1129 09:17:49.827757  336547 pod_ready.go:86] duration metric: took 36.505830154s for pod "coredns-66bc5c9577-ptx67" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:49.830193  336547 pod_ready.go:83] waiting for pod "etcd-embed-certs-160987" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:49.834087  336547 pod_ready.go:94] pod "etcd-embed-certs-160987" is "Ready"
	I1129 09:17:49.834117  336547 pod_ready.go:86] duration metric: took 3.892584ms for pod "etcd-embed-certs-160987" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:49.836236  336547 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-160987" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:49.840124  336547 pod_ready.go:94] pod "kube-apiserver-embed-certs-160987" is "Ready"
	I1129 09:17:49.840148  336547 pod_ready.go:86] duration metric: took 3.889352ms for pod "kube-apiserver-embed-certs-160987" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:49.842042  336547 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-160987" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:50.026423  336547 pod_ready.go:94] pod "kube-controller-manager-embed-certs-160987" is "Ready"
	I1129 09:17:50.026453  336547 pod_ready.go:86] duration metric: took 184.390727ms for pod "kube-controller-manager-embed-certs-160987" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:50.225618  336547 pod_ready.go:83] waiting for pod "kube-proxy-57l9h" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:50.626123  336547 pod_ready.go:94] pod "kube-proxy-57l9h" is "Ready"
	I1129 09:17:50.626149  336547 pod_ready.go:86] duration metric: took 400.500945ms for pod "kube-proxy-57l9h" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:50.826449  336547 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-160987" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:51.226295  336547 pod_ready.go:94] pod "kube-scheduler-embed-certs-160987" is "Ready"
	I1129 09:17:51.226329  336547 pod_ready.go:86] duration metric: took 399.854281ms for pod "kube-scheduler-embed-certs-160987" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:51.226346  336547 pod_ready.go:40] duration metric: took 37.909395781s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:17:51.285055  336547 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1129 09:17:51.286778  336547 out.go:179] * Done! kubectl is now configured to use "embed-certs-160987" cluster and "default" namespace by default
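	The 336547-tagged lines are the parallel embed-certs-160987 start running the same pod_ready.go readiness loop. That per-label wait is roughly what kubectl wait expresses in one line (sketch, using the kube-dns label from the log):
	
	    kubectl -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=120s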
	I1129 09:17:56.491067  343912 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1129 09:17:56.491128  343912 kubeadm.go:319] [preflight] Running pre-flight checks
	I1129 09:17:56.491204  343912 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1129 09:17:56.491252  343912 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1129 09:17:56.491321  343912 kubeadm.go:319] OS: Linux
	I1129 09:17:56.491400  343912 kubeadm.go:319] CGROUPS_CPU: enabled
	I1129 09:17:56.491441  343912 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1129 09:17:56.491502  343912 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1129 09:17:56.491558  343912 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1129 09:17:56.491602  343912 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1129 09:17:56.491642  343912 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1129 09:17:56.491683  343912 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1129 09:17:56.491733  343912 kubeadm.go:319] CGROUPS_IO: enabled
	I1129 09:17:56.491834  343912 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1129 09:17:56.491984  343912 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1129 09:17:56.492110  343912 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1129 09:17:56.492184  343912 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1129 09:17:56.493947  343912 out.go:252]   - Generating certificates and keys ...
	I1129 09:17:56.494037  343912 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1129 09:17:56.494134  343912 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1129 09:17:56.494235  343912 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1129 09:17:56.494315  343912 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1129 09:17:56.494392  343912 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1129 09:17:56.494466  343912 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1129 09:17:56.494546  343912 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1129 09:17:56.494718  343912 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-020433] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1129 09:17:56.494781  343912 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1129 09:17:56.494923  343912 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-020433] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1129 09:17:56.495006  343912 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1129 09:17:56.495078  343912 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1129 09:17:56.495157  343912 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1129 09:17:56.495234  343912 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1129 09:17:56.495280  343912 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1129 09:17:56.495370  343912 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1129 09:17:56.495457  343912 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1129 09:17:56.495570  343912 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1129 09:17:56.495624  343912 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1129 09:17:56.495696  343912 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1129 09:17:56.495760  343912 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1129 09:17:56.497322  343912 out.go:252]   - Booting up control plane ...
	I1129 09:17:56.497460  343912 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1129 09:17:56.497563  343912 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1129 09:17:56.497652  343912 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1129 09:17:56.497741  343912 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1129 09:17:56.497818  343912 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1129 09:17:56.497976  343912 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1129 09:17:56.498111  343912 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1129 09:17:56.498169  343912 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1129 09:17:56.498335  343912 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1129 09:17:56.498461  343912 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1129 09:17:56.498530  343912 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.935954ms
	I1129 09:17:56.498616  343912 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1129 09:17:56.498731  343912 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1129 09:17:56.498879  343912 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1129 09:17:56.498988  343912 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1129 09:17:56.499073  343912 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.504475511s
	I1129 09:17:56.499172  343912 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.695464789s
	I1129 09:17:56.499266  343912 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501879872s
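	These are plain HTTPS health endpoints, so the checks kubeadm runs can be reproduced by hand from the node (sketch; -k because the serving certs are cluster-signed):
	
	    curl -k https://192.168.76.2:8443/livez         # kube-apiserver
	    curl -k https://127.0.0.1:10257/healthz         # kube-controller-manager
	    curl -k https://127.0.0.1:10259/livez           # kube-scheduler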
	I1129 09:17:56.499440  343912 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1129 09:17:56.499624  343912 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1129 09:17:56.499691  343912 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1129 09:17:56.500020  343912 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-020433 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1129 09:17:56.500135  343912 kubeadm.go:319] [bootstrap-token] Using token: f82gs2.l4bciq1r030lvxp0
	I1129 09:17:56.501325  343912 out.go:252]   - Configuring RBAC rules ...
	I1129 09:17:56.501453  343912 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1129 09:17:56.501553  343912 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1129 09:17:56.501684  343912 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1129 09:17:56.501866  343912 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1129 09:17:56.502025  343912 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1129 09:17:56.502108  343912 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1129 09:17:56.502227  343912 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1129 09:17:56.502273  343912 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1129 09:17:56.502315  343912 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1129 09:17:56.502321  343912 kubeadm.go:319] 
	I1129 09:17:56.502376  343912 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1129 09:17:56.502381  343912 kubeadm.go:319] 
	I1129 09:17:56.502451  343912 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1129 09:17:56.502460  343912 kubeadm.go:319] 
	I1129 09:17:56.502481  343912 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1129 09:17:56.502532  343912 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1129 09:17:56.502576  343912 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1129 09:17:56.502586  343912 kubeadm.go:319] 
	I1129 09:17:56.502629  343912 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1129 09:17:56.502639  343912 kubeadm.go:319] 
	I1129 09:17:56.502689  343912 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1129 09:17:56.502697  343912 kubeadm.go:319] 
	I1129 09:17:56.502745  343912 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1129 09:17:56.502810  343912 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1129 09:17:56.502890  343912 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1129 09:17:56.502897  343912 kubeadm.go:319] 
	I1129 09:17:56.502971  343912 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1129 09:17:56.503057  343912 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1129 09:17:56.503064  343912 kubeadm.go:319] 
	I1129 09:17:56.503140  343912 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token f82gs2.l4bciq1r030lvxp0 \
	I1129 09:17:56.503224  343912 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c67f487f8861b4229187cb5510054720af943291cf02c59a2b23e487361f58e2 \
	I1129 09:17:56.503244  343912 kubeadm.go:319] 	--control-plane 
	I1129 09:17:56.503252  343912 kubeadm.go:319] 
	I1129 09:17:56.503335  343912 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1129 09:17:56.503344  343912 kubeadm.go:319] 
	I1129 09:17:56.503417  343912 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token f82gs2.l4bciq1r030lvxp0 \
	I1129 09:17:56.503523  343912 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c67f487f8861b4229187cb5510054720af943291cf02c59a2b23e487361f58e2 
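
	For reference, the --discovery-token-ca-cert-hash printed in the join commands above is not a secret; it can be re-derived on the control plane from the cluster CA with the standard openssl pipeline (a sketch; paths assume the default kubeadm layout):

	    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'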
	I1129 09:17:56.503547  343912 cni.go:84] Creating CNI manager for ""
	I1129 09:17:56.503557  343912 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:17:56.504793  343912 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1129 09:17:56.505922  343912 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1129 09:17:56.510364  343912 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1129 09:17:56.510383  343912 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1129 09:17:56.523891  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1129 09:17:56.771723  343912 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1129 09:17:56.771759  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:17:56.771857  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-020433 minikube.k8s.io/updated_at=2025_11_29T09_17_56_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af minikube.k8s.io/name=newest-cni-020433 minikube.k8s.io/primary=true
	I1129 09:17:56.870386  343912 ops.go:34] apiserver oom_adj: -16
	I1129 09:17:56.870493  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:17:57.370894  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:17:57.870685  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:17:58.370644  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:17:58.870909  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:17:59.371245  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:17:59.871577  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:18:00.370624  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:18:00.871043  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:18:01.370798  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:18:01.470229  343912 kubeadm.go:1114] duration metric: took 4.69851702s to wait for elevateKubeSystemPrivileges
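
	The half-second cadence of the "kubectl get sa default" calls above is minikube polling for the default ServiceAccount to exist before it treats kube-system privileges as elevated; a minimal shell equivalent of that wait loop, assuming the same binary and kubeconfig paths:

	    # poll until kube-controller-manager has created the "default" ServiceAccount
	    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done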
	I1129 09:18:01.470353  343912 kubeadm.go:403] duration metric: took 15.396675728s to StartCluster
	I1129 09:18:01.470403  343912 settings.go:142] acquiring lock: {Name:mkebe0b2667fa03f51e92459cd7b7f37fd4c23bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:18:01.470526  343912 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 09:18:01.473161  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/kubeconfig: {Name:mk6280be5f60ace95b1c1acc672c087e2366542f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:18:01.473501  343912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1129 09:18:01.473529  343912 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 09:18:01.473595  343912 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-020433"
	I1129 09:18:01.473611  343912 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-020433"
	I1129 09:18:01.473639  343912 host.go:66] Checking if "newest-cni-020433" exists ...
	I1129 09:18:01.473786  343912 config.go:182] Loaded profile config "newest-cni-020433": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:18:01.473872  343912 addons.go:70] Setting default-storageclass=true in profile "newest-cni-020433"
	I1129 09:18:01.473890  343912 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-020433"
	I1129 09:18:01.473505  343912 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 09:18:01.474234  343912 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Status}}
	I1129 09:18:01.474263  343912 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Status}}
	I1129 09:18:01.477129  343912 out.go:179] * Verifying Kubernetes components...
	I1129 09:18:01.478510  343912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:18:01.508488  343912 addons.go:239] Setting addon default-storageclass=true in "newest-cni-020433"
	I1129 09:18:01.508544  343912 host.go:66] Checking if "newest-cni-020433" exists ...
	I1129 09:18:01.509017  343912 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Status}}
	I1129 09:18:01.512765  343912 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:18:01.513878  343912 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:18:01.513901  343912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 09:18:01.513969  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:18:01.548536  343912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:18:01.549743  343912 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 09:18:01.549766  343912 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 09:18:01.549824  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:18:01.577630  343912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:18:01.603306  343912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1129 09:18:01.652699  343912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:18:01.679084  343912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:18:01.710552  343912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 09:18:01.806299  343912 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
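
	The sed pipeline above splices a log directive and a hosts stanza into the CoreDNS Corefile (before the errors and forward lines respectively), which is what makes host.minikube.internal resolvable from pods. After the replace, the relevant Corefile fragment reads roughly:

	    log
	    errors
	    hosts {
	       192.168.76.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf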
	I1129 09:18:01.808103  343912 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:18:01.808185  343912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:18:02.050418  343912 api_server.go:72] duration metric: took 576.481112ms to wait for apiserver process to appear ...
	I1129 09:18:02.050443  343912 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:18:02.050462  343912 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 09:18:02.057555  343912 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1129 09:18:02.058665  343912 api_server.go:141] control plane version: v1.34.1
	I1129 09:18:02.058689  343912 api_server.go:131] duration metric: took 8.238938ms to wait for apiserver health ...
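
	The healthz probe logged above can be reproduced by hand against the same endpoint; a sketch (-k because the apiserver serves a cluster-CA-signed certificate):

	    curl -sk https://192.168.76.2:8443/healthz
	    # -> ok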
	I1129 09:18:02.058698  343912 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:18:02.059528  343912 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1129 09:18:02.062186  343912 addons.go:530] duration metric: took 588.650166ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1129 09:18:02.062399  343912 system_pods.go:59] 8 kube-system pods found
	I1129 09:18:02.062440  343912 system_pods.go:61] "coredns-66bc5c9577-h8nqv" [c8cbc934-0df3-44c5-a3d7-fff7ca54ef86] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1129 09:18:02.062454  343912 system_pods.go:61] "etcd-newest-cni-020433" [47991984-6243-463b-9cda-95d0e18b6092] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 09:18:02.062465  343912 system_pods.go:61] "kindnet-gxgwn" [7e13d750-7bcf-4e2a-9663-512ecc23781a] Running
	I1129 09:18:02.062474  343912 system_pods.go:61] "kube-apiserver-newest-cni-020433" [20641eff-ff31-4e31-8983-1075116bcdd4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 09:18:02.062488  343912 system_pods.go:61] "kube-controller-manager-newest-cni-020433" [f5bece62-e41a-4cf6-bacc-29d4dd0754cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 09:18:02.062494  343912 system_pods.go:61] "kube-proxy-nqwzp" [118d6bdc-5c33-4ab5-bee8-6f8a3447c461] Running
	I1129 09:18:02.062507  343912 system_pods.go:61] "kube-scheduler-newest-cni-020433" [3224b587-95a1-4963-88ae-af38a3bd1d84] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 09:18:02.062523  343912 system_pods.go:61] "storage-provisioner" [30a16c03-a054-435c-8eec-ce64486eb6c2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1129 09:18:02.062532  343912 system_pods.go:74] duration metric: took 3.827683ms to wait for pod list to return data ...
	I1129 09:18:02.062545  343912 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:18:02.065169  343912 default_sa.go:45] found service account: "default"
	I1129 09:18:02.065192  343912 default_sa.go:55] duration metric: took 2.641298ms for default service account to be created ...
	I1129 09:18:02.065203  343912 kubeadm.go:587] duration metric: took 591.270549ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1129 09:18:02.065217  343912 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:18:02.068226  343912 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1129 09:18:02.068253  343912 node_conditions.go:123] node cpu capacity is 8
	I1129 09:18:02.068266  343912 node_conditions.go:105] duration metric: took 3.045433ms to run NodePressure ...
	I1129 09:18:02.068278  343912 start.go:242] waiting for startup goroutines ...
	I1129 09:18:02.310908  343912 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-020433" context rescaled to 1 replicas
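
	The rescale logged above trims CoreDNS (deployed with two replicas by default) down to one for this single-node cluster; the manual equivalent would be:

	    kubectl --context newest-cni-020433 -n kube-system \
	      scale deployment coredns --replicas=1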
	I1129 09:18:02.310945  343912 start.go:247] waiting for cluster config update ...
	I1129 09:18:02.310956  343912 start.go:256] writing updated cluster config ...
	I1129 09:18:02.311264  343912 ssh_runner.go:195] Run: rm -f paused
	I1129 09:18:02.364750  343912 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1129 09:18:02.367739  343912 out.go:179] * Done! kubectl is now configured to use "newest-cni-020433" cluster and "default" namespace by default
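
	Since minikube names the kubectl context after the profile, the handoff can be verified with (illustrative):

	    kubectl config current-context
	    # -> newest-cni-020433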
	
	
	==> CRI-O <==
	Nov 29 09:17:35 embed-certs-160987 crio[572]: time="2025-11-29T09:17:35.811632916Z" level=info msg="Started container" PID=1766 containerID=587f34901e932b9466dca1fe795751ef932a318b09a3565a13718b68bad71b19 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98zlj/dashboard-metrics-scraper id=5d9813dd-4e54-43e9-90ec-44112f0dff2e name=/runtime.v1.RuntimeService/StartContainer sandboxID=b83de4b5933becfff6eb105dfbd3f53faebe7bb718751bd1ff867871c0486f9e
	Nov 29 09:17:35 embed-certs-160987 crio[572]: time="2025-11-29T09:17:35.883158776Z" level=info msg="Removing container: 37aa5596c65f3b59d3ae1bddb9b6fc07f753d35a908edddbe3d59b1d949e62f9" id=87a05e0d-55c3-4366-9630-be0f445243fc name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 29 09:17:35 embed-certs-160987 crio[572]: time="2025-11-29T09:17:35.8955723Z" level=info msg="Removed container 37aa5596c65f3b59d3ae1bddb9b6fc07f753d35a908edddbe3d59b1d949e62f9: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98zlj/dashboard-metrics-scraper" id=87a05e0d-55c3-4366-9630-be0f445243fc name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 29 09:17:43 embed-certs-160987 crio[572]: time="2025-11-29T09:17:43.906462136Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b17b2e75-05b5-457f-9f79-5194f067cbaa name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:17:43 embed-certs-160987 crio[572]: time="2025-11-29T09:17:43.907371825Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=df1a1304-e2a1-4bb6-a784-f2bce87b215b name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:17:43 embed-certs-160987 crio[572]: time="2025-11-29T09:17:43.908499853Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=309d4bba-e05a-4e08-becd-570d8be6213e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:17:43 embed-certs-160987 crio[572]: time="2025-11-29T09:17:43.90862294Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:43 embed-certs-160987 crio[572]: time="2025-11-29T09:17:43.913442576Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:43 embed-certs-160987 crio[572]: time="2025-11-29T09:17:43.913644908Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e5acf427db1c00ef6752614fc1b17a91d751a00cb901896211603ee35c0c540d/merged/etc/passwd: no such file or directory"
	Nov 29 09:17:43 embed-certs-160987 crio[572]: time="2025-11-29T09:17:43.913699497Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e5acf427db1c00ef6752614fc1b17a91d751a00cb901896211603ee35c0c540d/merged/etc/group: no such file or directory"
	Nov 29 09:17:43 embed-certs-160987 crio[572]: time="2025-11-29T09:17:43.91401802Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:43 embed-certs-160987 crio[572]: time="2025-11-29T09:17:43.951156462Z" level=info msg="Created container 029d78d32a7ea14234ad87926a9be889f2d496efc65239c68f4aed436287d272: kube-system/storage-provisioner/storage-provisioner" id=309d4bba-e05a-4e08-becd-570d8be6213e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:17:43 embed-certs-160987 crio[572]: time="2025-11-29T09:17:43.951838275Z" level=info msg="Starting container: 029d78d32a7ea14234ad87926a9be889f2d496efc65239c68f4aed436287d272" id=9472dcce-1d34-4256-b62c-e6f8e358be56 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 09:17:43 embed-certs-160987 crio[572]: time="2025-11-29T09:17:43.954142801Z" level=info msg="Started container" PID=1780 containerID=029d78d32a7ea14234ad87926a9be889f2d496efc65239c68f4aed436287d272 description=kube-system/storage-provisioner/storage-provisioner id=9472dcce-1d34-4256-b62c-e6f8e358be56 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b6874fa8bf04e06691a7ad263dd2997d2c2202c554a48668b6b58753e0910805
	Nov 29 09:17:56 embed-certs-160987 crio[572]: time="2025-11-29T09:17:56.762980469Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7d72eec4-7186-49aa-92a0-43abc4f3d756 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:17:56 embed-certs-160987 crio[572]: time="2025-11-29T09:17:56.764099714Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=cc9ab8e5-b9a4-47b1-8e21-1b2387d809d9 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:17:56 embed-certs-160987 crio[572]: time="2025-11-29T09:17:56.76533776Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98zlj/dashboard-metrics-scraper" id=cbb8161b-1066-4b21-a8b1-ddccbf390546 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:17:56 embed-certs-160987 crio[572]: time="2025-11-29T09:17:56.765511239Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:56 embed-certs-160987 crio[572]: time="2025-11-29T09:17:56.771517461Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:56 embed-certs-160987 crio[572]: time="2025-11-29T09:17:56.772216967Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:56 embed-certs-160987 crio[572]: time="2025-11-29T09:17:56.814511135Z" level=info msg="Created container f2694b730cdae9713e45775e584e5c29751a7dc48494b00ad67c3002310bbbcb: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98zlj/dashboard-metrics-scraper" id=cbb8161b-1066-4b21-a8b1-ddccbf390546 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:17:56 embed-certs-160987 crio[572]: time="2025-11-29T09:17:56.815706631Z" level=info msg="Starting container: f2694b730cdae9713e45775e584e5c29751a7dc48494b00ad67c3002310bbbcb" id=f6e4d103-9c0d-49c0-bb04-8116f4727d98 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 09:17:56 embed-certs-160987 crio[572]: time="2025-11-29T09:17:56.818248274Z" level=info msg="Started container" PID=1816 containerID=f2694b730cdae9713e45775e584e5c29751a7dc48494b00ad67c3002310bbbcb description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98zlj/dashboard-metrics-scraper id=f6e4d103-9c0d-49c0-bb04-8116f4727d98 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b83de4b5933becfff6eb105dfbd3f53faebe7bb718751bd1ff867871c0486f9e
	Nov 29 09:17:56 embed-certs-160987 crio[572]: time="2025-11-29T09:17:56.942274588Z" level=info msg="Removing container: 587f34901e932b9466dca1fe795751ef932a318b09a3565a13718b68bad71b19" id=ad756c45-a2b6-4276-a00b-aa9313c1a260 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 29 09:17:56 embed-certs-160987 crio[572]: time="2025-11-29T09:17:56.952255178Z" level=info msg="Removed container 587f34901e932b9466dca1fe795751ef932a318b09a3565a13718b68bad71b19: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98zlj/dashboard-metrics-scraper" id=ad756c45-a2b6-4276-a00b-aa9313c1a260 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	f2694b730cdae       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           9 seconds ago       Exited              dashboard-metrics-scraper   3                   b83de4b5933be       dashboard-metrics-scraper-6ffb444bf9-98zlj   kubernetes-dashboard
	029d78d32a7ea       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   b6874fa8bf04e       storage-provisioner                          kube-system
	9466ff8c42bf1       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   399d73db81e98       kubernetes-dashboard-855c9754f9-97f9m        kubernetes-dashboard
	b1168c95c222d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           52 seconds ago      Running             coredns                     0                   1fc2833d9c797       coredns-66bc5c9577-ptx67                     kube-system
	b086130dc9200       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   82f73c2cc547a       busybox                                      default
	00c1cfc1e5404       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           52 seconds ago      Running             kube-proxy                  0                   69a59b9356597       kube-proxy-57l9h                             kube-system
	86f9aa5168cf4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   b6874fa8bf04e       storage-provisioner                          kube-system
	668d944505877       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   4b714f98404d9       kindnet-cvmj6                                kube-system
	b910bdb65bded       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           56 seconds ago      Running             kube-scheduler              0                   2dc3587a12637       kube-scheduler-embed-certs-160987            kube-system
	d40c506138259       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           56 seconds ago      Running             kube-controller-manager     0                   85ae4f4143ba7       kube-controller-manager-embed-certs-160987   kube-system
	6ee1a1cef6abf       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           56 seconds ago      Running             kube-apiserver              0                   bb3d9b96a42ac       kube-apiserver-embed-certs-160987            kube-system
	062c767d0f027       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           56 seconds ago      Running             etcd                        0                   7cd248abc3317       etcd-embed-certs-160987                      kube-system
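
	This table is the node's CRI view of its containers; it can be reproduced against the CRI-O socket, e.g. via minikube ssh (a sketch):

	    minikube -p embed-certs-160987 ssh -- sudo crictl ps -a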
	
	
	==> coredns [b1168c95c222d7ead90133fcf186480b098d18439b28dae77675b1df0317dc77] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57185 - 28717 "HINFO IN 2815726449020211220.1603990658617242147. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.038845393s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
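
	The "dial tcp 10.96.0.1:443: i/o timeout" errors above are the usual symptom of CoreDNS starting before kube-proxy/kindnet have programmed the Service VIP; they stop once the kubernetes Service becomes reachable. One way to confirm the VIP is backed by the apiserver (illustrative output):

	    kubectl --context embed-certs-160987 get endpoints kubernetes
	    # NAME         ENDPOINTS           AGE
	    # kubernetes   192.168.85.2:8443   2m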
	
	
	==> describe nodes <==
	Name:               embed-certs-160987
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-160987
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=embed-certs-160987
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T09_16_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 09:16:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-160987
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 09:18:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 09:17:42 +0000   Sat, 29 Nov 2025 09:16:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 09:17:42 +0000   Sat, 29 Nov 2025 09:16:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 09:17:42 +0000   Sat, 29 Nov 2025 09:16:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 09:17:42 +0000   Sat, 29 Nov 2025 09:16:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-160987
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                01febc21-6293-4ce5-852c-5d2b1b91b577
	  Boot ID:                    5b66e152-4b71-4129-a911-cefbf4861c86
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-ptx67                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-embed-certs-160987                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         112s
	  kube-system                 kindnet-cvmj6                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-embed-certs-160987             250m (3%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-embed-certs-160987    200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-57l9h                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-embed-certs-160987             100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-98zlj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-97f9m         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 106s               kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  Starting                 113s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  112s               kubelet          Node embed-certs-160987 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s               kubelet          Node embed-certs-160987 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     112s               kubelet          Node embed-certs-160987 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           108s               node-controller  Node embed-certs-160987 event: Registered Node embed-certs-160987 in Controller
	  Normal  NodeReady                96s                kubelet          Node embed-certs-160987 status is now: NodeReady
	  Normal  Starting                 58s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)  kubelet          Node embed-certs-160987 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)  kubelet          Node embed-certs-160987 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)  kubelet          Node embed-certs-160987 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           52s                node-controller  Node embed-certs-160987 event: Registered Node embed-certs-160987 in Controller
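
	The node report above is ordinary kubectl output and can be regenerated with:

	    kubectl --context embed-certs-160987 describe node embed-certs-160987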
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 e1 87 e4 84 ae 08 06
	[  +0.002640] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8a c7 6c 3e 12 d1 08 06
	[ +14.778105] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 56 8a 29 c7 2a 6f 08 06
	[ +13.520989] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 86 32 aa eb 52 77 08 06
	[  +0.000371] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 56 8a 29 c7 2a 6f 08 06
	[ +14.762906] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 df ae 93 da f6 08 06
	[  +0.000384] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a c7 6c 3e 12 d1 08 06
	[Nov29 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 2e a3 ac f9 5c c0 08 06
	[ +13.297626] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 33 ff 54 aa a1 08 06
	[  +7.390906] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0a 68 f3 3d 3c e9 08 06
	[  +0.000436] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 2e a3 ac f9 5c c0 08 06
	[  +7.145922] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ea c3 2f 2a a3 01 08 06
	[  +0.000415] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e2 33 ff 54 aa a1 08 06
	
	
	==> etcd [062c767d0f027b4b3689a35cad7c6003a28dac146ef6a6e9732382f36ec71ffa] <==
	{"level":"warn","ts":"2025-11-29T09:17:10.843095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:10.852521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:10.864697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:10.875321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:10.889932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:10.903570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:10.913203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:10.922869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:10.942674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:10.953278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:10.964371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:10.975672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:10.983549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:10.993018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.012626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.023202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.035243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.048294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.058477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.069458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.082444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.092376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.103152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.110991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.196574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34398","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:18:06 up  1:00,  0 user,  load average: 2.97, 3.66, 2.49
	Linux embed-certs-160987 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [668d94450587744ba0fea9e8fca8a95da8eb1024372b15cdc4023f63e16b8f81] <==
	I1129 09:17:13.375969       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 09:17:13.376243       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1129 09:17:13.376426       1 main.go:148] setting mtu 1500 for CNI 
	I1129 09:17:13.376442       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 09:17:13.376463       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T09:17:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 09:17:13.581939       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 09:17:13.581960       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 09:17:13.581968       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 09:17:13.582241       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1129 09:17:14.013869       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 09:17:14.013989       1 metrics.go:72] Registering metrics
	I1129 09:17:14.014359       1 controller.go:711] "Syncing nftables rules"
	I1129 09:17:23.581566       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 09:17:23.581669       1 main.go:301] handling current node
	I1129 09:17:33.585976       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 09:17:33.586037       1 main.go:301] handling current node
	I1129 09:17:43.582553       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 09:17:43.582588       1 main.go:301] handling current node
	I1129 09:17:53.582199       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 09:17:53.582242       1 main.go:301] handling current node
	I1129 09:18:03.586933       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 09:18:03.586979       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6ee1a1cef6abf99fe2be4154d33fa7e55335140b3c9fc7c979eabca17e682341] <==
	I1129 09:17:11.869197       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:17:11.869240       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1129 09:17:11.872869       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1129 09:17:11.873199       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1129 09:17:11.873282       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1129 09:17:11.873262       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1129 09:17:11.896724       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1129 09:17:11.909168       1 cache.go:39] Caches are synced for autoregister controller
	I1129 09:17:11.909380       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1129 09:17:11.910446       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1129 09:17:11.910564       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1129 09:17:11.925184       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1129 09:17:11.933536       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 09:17:12.472981       1 controller.go:667] quota admission added evaluator for: namespaces
	I1129 09:17:12.517399       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 09:17:12.560981       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 09:17:12.577197       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 09:17:12.591506       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 09:17:12.664504       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.74.233"}
	I1129 09:17:12.680519       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.177.131"}
	I1129 09:17:12.772520       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 09:17:15.253649       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 09:17:15.253693       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 09:17:15.355351       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 09:17:15.402722       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [d40c5061382593cad885d4b3c86be7a3641ec567ffe3cb652cfd84dd0c2396bf] <==
	I1129 09:17:14.839644       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1129 09:17:14.843978       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1129 09:17:14.846325       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1129 09:17:14.849644       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1129 09:17:14.849673       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1129 09:17:14.849679       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1129 09:17:14.849717       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1129 09:17:14.849736       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1129 09:17:14.849750       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:17:14.849760       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1129 09:17:14.849765       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1129 09:17:14.849776       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1129 09:17:14.849800       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1129 09:17:14.849917       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1129 09:17:14.849935       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-160987"
	I1129 09:17:14.849980       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1129 09:17:14.850109       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1129 09:17:14.851374       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1129 09:17:14.851443       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1129 09:17:14.854030       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1129 09:17:14.854035       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1129 09:17:14.856026       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:17:14.857109       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1129 09:17:14.859401       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1129 09:17:14.876889       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [00c1cfc1e5404b627c278cb3aa524243f84e0940022fa9856a40f2180118e3da] <==
	I1129 09:17:13.213345       1 server_linux.go:53] "Using iptables proxy"
	I1129 09:17:13.290090       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 09:17:13.390695       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 09:17:13.390744       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1129 09:17:13.390895       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 09:17:13.413270       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 09:17:13.413361       1 server_linux.go:132] "Using iptables Proxier"
	I1129 09:17:13.419545       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 09:17:13.420082       1 server.go:527] "Version info" version="v1.34.1"
	I1129 09:17:13.420127       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:17:13.421456       1 config.go:106] "Starting endpoint slice config controller"
	I1129 09:17:13.421495       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 09:17:13.421505       1 config.go:200] "Starting service config controller"
	I1129 09:17:13.421524       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 09:17:13.421534       1 config.go:309] "Starting node config controller"
	I1129 09:17:13.421542       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 09:17:13.421550       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 09:17:13.421514       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 09:17:13.421582       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 09:17:13.521752       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 09:17:13.521771       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1129 09:17:13.522023       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [b910bdb65bdedc5ad424106b6aea90fdb221e9c9e03ce5e62c16682d9c219dbf] <==
	I1129 09:17:10.491803       1 serving.go:386] Generated self-signed cert in-memory
	W1129 09:17:11.785019       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1129 09:17:11.785093       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1129 09:17:11.785221       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1129 09:17:11.785235       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1129 09:17:11.846940       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1129 09:17:11.847700       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:17:11.854624       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1129 09:17:11.854796       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:17:11.854812       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:17:11.854833       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1129 09:17:11.955503       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 09:17:19 embed-certs-160987 kubelet[735]: I1129 09:17:19.687264     735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 29 09:17:19 embed-certs-160987 kubelet[735]: I1129 09:17:19.833676     735 scope.go:117] "RemoveContainer" containerID="37aa5596c65f3b59d3ae1bddb9b6fc07f753d35a908edddbe3d59b1d949e62f9"
	Nov 29 09:17:19 embed-certs-160987 kubelet[735]: I1129 09:17:19.836139     735 scope.go:117] "RemoveContainer" containerID="0ccec968e9c37101f3ff30b51ae086a796be8e0337e2c762bcfd3794224e211f"
	Nov 29 09:17:19 embed-certs-160987 kubelet[735]: E1129 09:17:19.837212     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-98zlj_kubernetes-dashboard(ca2513a0-66c7-4c57-96c0-6eaee18c65a9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98zlj" podUID="ca2513a0-66c7-4c57-96c0-6eaee18c65a9"
	Nov 29 09:17:20 embed-certs-160987 kubelet[735]: I1129 09:17:20.838683     735 scope.go:117] "RemoveContainer" containerID="37aa5596c65f3b59d3ae1bddb9b6fc07f753d35a908edddbe3d59b1d949e62f9"
	Nov 29 09:17:20 embed-certs-160987 kubelet[735]: E1129 09:17:20.839396     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-98zlj_kubernetes-dashboard(ca2513a0-66c7-4c57-96c0-6eaee18c65a9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98zlj" podUID="ca2513a0-66c7-4c57-96c0-6eaee18c65a9"
	Nov 29 09:17:22 embed-certs-160987 kubelet[735]: I1129 09:17:22.855625     735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-97f9m" podStartSLOduration=1.5622196019999999 podStartE2EDuration="7.855601368s" podCreationTimestamp="2025-11-29 09:17:15 +0000 UTC" firstStartedPulling="2025-11-29 09:17:15.813149261 +0000 UTC m=+7.162417403" lastFinishedPulling="2025-11-29 09:17:22.106531027 +0000 UTC m=+13.455799169" observedRunningTime="2025-11-29 09:17:22.855462872 +0000 UTC m=+14.204731023" watchObservedRunningTime="2025-11-29 09:17:22.855601368 +0000 UTC m=+14.204869519"
	Nov 29 09:17:25 embed-certs-160987 kubelet[735]: I1129 09:17:25.400677     735 scope.go:117] "RemoveContainer" containerID="37aa5596c65f3b59d3ae1bddb9b6fc07f753d35a908edddbe3d59b1d949e62f9"
	Nov 29 09:17:25 embed-certs-160987 kubelet[735]: E1129 09:17:25.400868     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-98zlj_kubernetes-dashboard(ca2513a0-66c7-4c57-96c0-6eaee18c65a9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98zlj" podUID="ca2513a0-66c7-4c57-96c0-6eaee18c65a9"
	Nov 29 09:17:35 embed-certs-160987 kubelet[735]: I1129 09:17:35.753722     735 scope.go:117] "RemoveContainer" containerID="37aa5596c65f3b59d3ae1bddb9b6fc07f753d35a908edddbe3d59b1d949e62f9"
	Nov 29 09:17:35 embed-certs-160987 kubelet[735]: I1129 09:17:35.881363     735 scope.go:117] "RemoveContainer" containerID="37aa5596c65f3b59d3ae1bddb9b6fc07f753d35a908edddbe3d59b1d949e62f9"
	Nov 29 09:17:35 embed-certs-160987 kubelet[735]: I1129 09:17:35.881676     735 scope.go:117] "RemoveContainer" containerID="587f34901e932b9466dca1fe795751ef932a318b09a3565a13718b68bad71b19"
	Nov 29 09:17:35 embed-certs-160987 kubelet[735]: E1129 09:17:35.881820     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-98zlj_kubernetes-dashboard(ca2513a0-66c7-4c57-96c0-6eaee18c65a9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98zlj" podUID="ca2513a0-66c7-4c57-96c0-6eaee18c65a9"
	Nov 29 09:17:43 embed-certs-160987 kubelet[735]: I1129 09:17:43.906111     735 scope.go:117] "RemoveContainer" containerID="86f9aa5168cf43f40605f3e7fc7ef07afa72f313e7f427fb772bbccde2c8feb9"
	Nov 29 09:17:45 embed-certs-160987 kubelet[735]: I1129 09:17:45.401762     735 scope.go:117] "RemoveContainer" containerID="587f34901e932b9466dca1fe795751ef932a318b09a3565a13718b68bad71b19"
	Nov 29 09:17:45 embed-certs-160987 kubelet[735]: E1129 09:17:45.402015     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-98zlj_kubernetes-dashboard(ca2513a0-66c7-4c57-96c0-6eaee18c65a9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98zlj" podUID="ca2513a0-66c7-4c57-96c0-6eaee18c65a9"
	Nov 29 09:17:56 embed-certs-160987 kubelet[735]: I1129 09:17:56.762452     735 scope.go:117] "RemoveContainer" containerID="587f34901e932b9466dca1fe795751ef932a318b09a3565a13718b68bad71b19"
	Nov 29 09:17:56 embed-certs-160987 kubelet[735]: I1129 09:17:56.940900     735 scope.go:117] "RemoveContainer" containerID="587f34901e932b9466dca1fe795751ef932a318b09a3565a13718b68bad71b19"
	Nov 29 09:17:56 embed-certs-160987 kubelet[735]: I1129 09:17:56.941115     735 scope.go:117] "RemoveContainer" containerID="f2694b730cdae9713e45775e584e5c29751a7dc48494b00ad67c3002310bbbcb"
	Nov 29 09:17:56 embed-certs-160987 kubelet[735]: E1129 09:17:56.941322     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-98zlj_kubernetes-dashboard(ca2513a0-66c7-4c57-96c0-6eaee18c65a9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98zlj" podUID="ca2513a0-66c7-4c57-96c0-6eaee18c65a9"
	Nov 29 09:18:03 embed-certs-160987 kubelet[735]: I1129 09:18:03.573552     735 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 29 09:18:03 embed-certs-160987 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 29 09:18:03 embed-certs-160987 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 29 09:18:03 embed-certs-160987 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 29 09:18:03 embed-certs-160987 systemd[1]: kubelet.service: Consumed 1.879s CPU time.
	
	
	==> kubernetes-dashboard [9466ff8c42bf12177809c91484f8627a7ea39bef17beb3f8f5f5fbc14b260a39] <==
	2025/11/29 09:17:22 Starting overwatch
	2025/11/29 09:17:22 Using namespace: kubernetes-dashboard
	2025/11/29 09:17:22 Using in-cluster config to connect to apiserver
	2025/11/29 09:17:22 Using secret token for csrf signing
	2025/11/29 09:17:22 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/29 09:17:22 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/29 09:17:22 Successful initial request to the apiserver, version: v1.34.1
	2025/11/29 09:17:22 Generating JWE encryption key
	2025/11/29 09:17:22 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/29 09:17:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/29 09:17:22 Initializing JWE encryption key from synchronized object
	2025/11/29 09:17:22 Creating in-cluster Sidecar client
	2025/11/29 09:17:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/29 09:17:22 Serving insecurely on HTTP port: 9090
	2025/11/29 09:17:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [029d78d32a7ea14234ad87926a9be889f2d496efc65239c68f4aed436287d272] <==
	I1129 09:17:43.966170       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1129 09:17:43.972771       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1129 09:17:43.972823       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1129 09:17:43.975043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:17:47.429749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:17:51.690505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:17:55.289077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:17:58.343062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:01.366169       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:01.372527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:18:01.373536       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 09:18:01.373748       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-160987_dbb6caf1-1aa4-4fa2-a1e7-de25abf2cbec!
	I1129 09:18:01.373745       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9795b373-c1b1-46fc-9f5b-0328f9c89ace", APIVersion:"v1", ResourceVersion:"634", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-160987_dbb6caf1-1aa4-4fa2-a1e7-de25abf2cbec became leader
	W1129 09:18:01.383666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:01.390368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:18:01.474713       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-160987_dbb6caf1-1aa4-4fa2-a1e7-de25abf2cbec!
	W1129 09:18:03.393931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:03.400042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:05.404859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:05.411636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [86f9aa5168cf43f40605f3e7fc7ef07afa72f313e7f427fb772bbccde2c8feb9] <==
	I1129 09:17:13.144720       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1129 09:17:43.147773       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
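The kube-scheduler warning in the logs above prints its own remediation for the forbidden configmap read. A minimal sketch, assuming the missing RBAC binding is the cause (the binding name below is a placeholder; the role and the user come from the log line itself):

	# grant the scheduler read access to the extension-apiserver-authentication ConfigMap
	kubectl -n kube-system create rolebinding scheduler-auth-reader \
	  --role=extension-apiserver-authentication-reader \
	  --user=system:kube-scheduler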
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-160987 -n embed-certs-160987
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-160987 -n embed-certs-160987: exit status 2 (345.465038ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
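Exit status 2 alongside a "Running" host is consistent with a just-paused cluster: minikube status exits non-zero when the cluster is not fully running, which the Pause test deliberately causes. A quick way to reproduce the check by hand, using the profile from this report:

	# print one status field via the same Go template, then show the exit code
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p embed-certs-160987
	echo $?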
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-160987 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
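The proxy snapshot is recorded because a stray HTTP(S)_PROXY on the host is a common cause of apiserver connectivity failures in these tests; all three variables were empty for this run. The same check by hand:

	# list any proxy-related variables in the current environment
	env | grep -i _proxy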
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-160987
helpers_test.go:243: (dbg) docker inspect embed-certs-160987:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7b45c51a261424d0f936d1259fe11001ab6554f9438c68d643ce706af6921dd8",
	        "Created": "2025-11-29T09:15:55.293730055Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 336822,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T09:17:02.367334099Z",
	            "FinishedAt": "2025-11-29T09:17:00.918949985Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/7b45c51a261424d0f936d1259fe11001ab6554f9438c68d643ce706af6921dd8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7b45c51a261424d0f936d1259fe11001ab6554f9438c68d643ce706af6921dd8/hostname",
	        "HostsPath": "/var/lib/docker/containers/7b45c51a261424d0f936d1259fe11001ab6554f9438c68d643ce706af6921dd8/hosts",
	        "LogPath": "/var/lib/docker/containers/7b45c51a261424d0f936d1259fe11001ab6554f9438c68d643ce706af6921dd8/7b45c51a261424d0f936d1259fe11001ab6554f9438c68d643ce706af6921dd8-json.log",
	        "Name": "/embed-certs-160987",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-160987:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-160987",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7b45c51a261424d0f936d1259fe11001ab6554f9438c68d643ce706af6921dd8",
	                "LowerDir": "/var/lib/docker/overlay2/338bc42e1b80ba62e9fe902fb732aa26dedd5005037b5297154c97608cba7a83-init/diff:/var/lib/docker/overlay2/5b012372cfb54f6c71f4d7f0bca0124866eeda530eaa04bd84a67c7b4c8b35a8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/338bc42e1b80ba62e9fe902fb732aa26dedd5005037b5297154c97608cba7a83/merged",
	                "UpperDir": "/var/lib/docker/overlay2/338bc42e1b80ba62e9fe902fb732aa26dedd5005037b5297154c97608cba7a83/diff",
	                "WorkDir": "/var/lib/docker/overlay2/338bc42e1b80ba62e9fe902fb732aa26dedd5005037b5297154c97608cba7a83/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-160987",
	                "Source": "/var/lib/docker/volumes/embed-certs-160987/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-160987",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-160987",
	                "name.minikube.sigs.k8s.io": "embed-certs-160987",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "e8324bcf329a0f25cc97f5027d4d2be0438676e9e1ff92b80a2f2fff2536a848",
	            "SandboxKey": "/var/run/docker/netns/e8324bcf329a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-160987": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8f9ed915c5ff4babba294a5f95692de1cf5aa6f0db70276e7d083db5e7930b90",
	                    "EndpointID": "a2f69703b583f6ac1d1305e75301a3877d4819d6e5a7565a6dac2e6af7bcff44",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "2a:cd:8b:66:8a:0b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-160987",
	                        "7b45c51a2614"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
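The full inspect dump above can be narrowed to the few fields the post-mortem actually consumes. A minimal sketch using docker's Go-template output (container name taken from this report):

	# print the container state and the host port mapped to the apiserver (8443/tcp)
	docker inspect -f '{{.State.Status}} {{(index .NetworkSettings.Ports "8443/tcp" 0).HostPort}}' embed-certs-160987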
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-160987 -n embed-certs-160987
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-160987 -n embed-certs-160987: exit status 2 (328.120666ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-160987 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-160987 logs -n 25: (1.103129265s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-160987 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-632243 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │                     │
	│ stop    │ -p embed-certs-160987 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:17 UTC │
	│ stop    │ -p default-k8s-diff-port-632243 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:16 UTC │ 29 Nov 25 09:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-160987 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ start   │ -p embed-certs-160987 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-632243 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ start   │ -p default-k8s-diff-port-632243 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ image   │ old-k8s-version-680646 image list --format=json                                                                                                                                                                                               │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ pause   │ -p old-k8s-version-680646 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │                     │
	│ delete  │ -p old-k8s-version-680646                                                                                                                                                                                                                     │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ image   │ no-preload-897274 image list --format=json                                                                                                                                                                                                    │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ pause   │ -p no-preload-897274 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │                     │
	│ delete  │ -p old-k8s-version-680646                                                                                                                                                                                                                     │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ start   │ -p newest-cni-020433 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-020433            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:18 UTC │
	│ delete  │ -p no-preload-897274                                                                                                                                                                                                                          │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ delete  │ -p no-preload-897274                                                                                                                                                                                                                          │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ image   │ default-k8s-diff-port-632243 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ pause   │ -p default-k8s-diff-port-632243 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-020433 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-020433            │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │                     │
	│ image   │ embed-certs-160987 image list --format=json                                                                                                                                                                                                   │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │ 29 Nov 25 09:18 UTC │
	│ pause   │ -p embed-certs-160987 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-632243                                                                                                                                                                                                               │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │ 29 Nov 25 09:18 UTC │
	│ stop    │ -p newest-cni-020433 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-020433            │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-632243                                                                                                                                                                                                               │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │ 29 Nov 25 09:18 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:17:32
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 09:17:32.750525  343912 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:17:32.750831  343912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:17:32.750854  343912 out.go:374] Setting ErrFile to fd 2...
	I1129 09:17:32.750859  343912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:17:32.751040  343912 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 09:17:32.751569  343912 out.go:368] Setting JSON to false
	I1129 09:17:32.753086  343912 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3605,"bootTime":1764404248,"procs":412,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 09:17:32.753155  343912 start.go:143] virtualization: kvm guest
	I1129 09:17:32.755163  343912 out.go:179] * [newest-cni-020433] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 09:17:32.756656  343912 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:17:32.756692  343912 notify.go:221] Checking for updates...
	I1129 09:17:32.759425  343912 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:17:32.760722  343912 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 09:17:32.765362  343912 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5652/.minikube
	I1129 09:17:32.766699  343912 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 09:17:32.768011  343912 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:17:32.769812  343912 config.go:182] Loaded profile config "default-k8s-diff-port-632243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:17:32.769952  343912 config.go:182] Loaded profile config "embed-certs-160987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:17:32.770081  343912 config.go:182] Loaded profile config "no-preload-897274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:17:32.770208  343912 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:17:32.794655  343912 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1129 09:17:32.794775  343912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:17:32.856269  343912 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:74 SystemTime:2025-11-29 09:17:32.845151576 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:17:32.856388  343912 docker.go:319] overlay module found
	I1129 09:17:32.858258  343912 out.go:179] * Using the docker driver based on user configuration
	I1129 09:17:32.859415  343912 start.go:309] selected driver: docker
	I1129 09:17:32.859434  343912 start.go:927] validating driver "docker" against <nil>
	I1129 09:17:32.859451  343912 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:17:32.860352  343912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:17:32.930751  343912 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:74 SystemTime:2025-11-29 09:17:32.91839311 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:17:32.930951  343912 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1129 09:17:32.930985  343912 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1129 09:17:32.931224  343912 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1129 09:17:32.933425  343912 out.go:179] * Using Docker driver with root privileges
	I1129 09:17:32.934824  343912 cni.go:84] Creating CNI manager for ""
	I1129 09:17:32.934925  343912 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:17:32.934944  343912 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1129 09:17:32.935044  343912 start.go:353] cluster config:
	{Name:newest-cni-020433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-020433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:17:32.936354  343912 out.go:179] * Starting "newest-cni-020433" primary control-plane node in "newest-cni-020433" cluster
	I1129 09:17:32.937514  343912 cache.go:134] Beginning downloading kic base image for docker with crio
	I1129 09:17:32.938803  343912 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 09:17:32.940016  343912 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:17:32.940051  343912 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1129 09:17:32.940062  343912 cache.go:65] Caching tarball of preloaded images
	I1129 09:17:32.940107  343912 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 09:17:32.940163  343912 preload.go:238] Found /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1129 09:17:32.940176  343912 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1129 09:17:32.940278  343912 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/config.json ...
	I1129 09:17:32.940301  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/config.json: {Name:mk7d4da653b0e884b27837053cd3d354c3ff76e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:32.963727  343912 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 09:17:32.963754  343912 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 09:17:32.963777  343912 cache.go:243] Successfully downloaded all kic artifacts
	I1129 09:17:32.963830  343912 start.go:360] acquireMachinesLock for newest-cni-020433: {Name:mk6347901682a01c9d317c6a402722ce1e16792e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:17:32.963998  343912 start.go:364] duration metric: took 95.455µs to acquireMachinesLock for "newest-cni-020433"
	I1129 09:17:32.964029  343912 start.go:93] Provisioning new machine with config: &{Name:newest-cni-020433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-020433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 09:17:32.964128  343912 start.go:125] createHost starting for "" (driver="docker")
	W1129 09:17:33.828970  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	W1129 09:17:35.829789  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	W1129 09:17:33.948277  336858 pod_ready.go:104] pod "coredns-66bc5c9577-z4m7c" is not "Ready", error: <nil>
	W1129 09:17:36.448064  336858 pod_ready.go:104] pod "coredns-66bc5c9577-z4m7c" is not "Ready", error: <nil>
	I1129 09:17:32.965989  343912 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1129 09:17:32.966316  343912 start.go:159] libmachine.API.Create for "newest-cni-020433" (driver="docker")
	I1129 09:17:32.966356  343912 client.go:173] LocalClient.Create starting
	I1129 09:17:32.966470  343912 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem
	I1129 09:17:32.966524  343912 main.go:143] libmachine: Decoding PEM data...
	I1129 09:17:32.966555  343912 main.go:143] libmachine: Parsing certificate...
	I1129 09:17:32.966626  343912 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem
	I1129 09:17:32.966654  343912 main.go:143] libmachine: Decoding PEM data...
	I1129 09:17:32.966670  343912 main.go:143] libmachine: Parsing certificate...
	I1129 09:17:32.967123  343912 cli_runner.go:164] Run: docker network inspect newest-cni-020433 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1129 09:17:32.987734  343912 cli_runner.go:211] docker network inspect newest-cni-020433 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1129 09:17:32.987872  343912 network_create.go:284] running [docker network inspect newest-cni-020433] to gather additional debugging logs...
	I1129 09:17:32.987905  343912 cli_runner.go:164] Run: docker network inspect newest-cni-020433
	W1129 09:17:33.007164  343912 cli_runner.go:211] docker network inspect newest-cni-020433 returned with exit code 1
	I1129 09:17:33.007194  343912 network_create.go:287] error running [docker network inspect newest-cni-020433]: docker network inspect newest-cni-020433: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-020433 not found
	I1129 09:17:33.007209  343912 network_create.go:289] output of [docker network inspect newest-cni-020433]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-020433 not found
	
	** /stderr **
	I1129 09:17:33.007343  343912 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:17:33.027663  343912 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-94fc752bc7a7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:ed:43:e0:ad:5a} reservation:<nil>}
	I1129 09:17:33.028420  343912 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-4cfc302f5d5a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:42:73:ac:ba:18:bb} reservation:<nil>}
	I1129 09:17:33.029339  343912 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-05a73bbe16b8 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:76:a9:af:00:78:ac} reservation:<nil>}
	I1129 09:17:33.030217  343912 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eb6cd0}
	I1129 09:17:33.030243  343912 network_create.go:124] attempt to create docker network newest-cni-020433 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1129 09:17:33.030303  343912 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-020433 newest-cni-020433
	I1129 09:17:33.088543  343912 network_create.go:108] docker network newest-cni-020433 192.168.76.0/24 created
	I1129 09:17:33.088582  343912 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-020433" container
	I1129 09:17:33.088651  343912 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1129 09:17:33.110031  343912 cli_runner.go:164] Run: docker volume create newest-cni-020433 --label name.minikube.sigs.k8s.io=newest-cni-020433 --label created_by.minikube.sigs.k8s.io=true
	I1129 09:17:33.131986  343912 oci.go:103] Successfully created a docker volume newest-cni-020433
	I1129 09:17:33.132086  343912 cli_runner.go:164] Run: docker run --rm --name newest-cni-020433-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-020433 --entrypoint /usr/bin/test -v newest-cni-020433:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1129 09:17:33.542784  343912 oci.go:107] Successfully prepared a docker volume newest-cni-020433
	I1129 09:17:33.542890  343912 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:17:33.542904  343912 kic.go:194] Starting extracting preloaded images to volume ...
	I1129 09:17:33.542963  343912 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-020433:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	W1129 09:17:38.328506  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	W1129 09:17:40.827427  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	W1129 09:17:38.452229  336858 pod_ready.go:104] pod "coredns-66bc5c9577-z4m7c" is not "Ready", error: <nil>
	W1129 09:17:40.947913  336858 pod_ready.go:104] pod "coredns-66bc5c9577-z4m7c" is not "Ready", error: <nil>
	I1129 09:17:38.398985  343912 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-020433:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.855972089s)
	I1129 09:17:38.399017  343912 kic.go:203] duration metric: took 4.856111068s to extract preloaded images to volume ...
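	Note: the preload is unpacked straight into the named Docker volume by a throwaway container running tar with lz4 decompression. Roughly (PRELOAD and KICBASE are placeholders for the tarball path and kicbase image shown above):
	  $ docker run --rm --entrypoint /usr/bin/tar \
	      -v "$PRELOAD:/preloaded.tar:ro" \
	      -v newest-cni-020433:/extractDir \
	      "$KICBASE" -I lz4 -xf /preloaded.tar -C /extractDir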
	W1129 09:17:38.399145  343912 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1129 09:17:38.399190  343912 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1129 09:17:38.399238  343912 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1129 09:17:38.467132  343912 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-020433 --name newest-cni-020433 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-020433 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-020433 --network newest-cni-020433 --ip 192.168.76.2 --volume newest-cni-020433:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1129 09:17:39.064807  343912 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Running}}
	I1129 09:17:39.085951  343912 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Status}}
	I1129 09:17:39.108652  343912 cli_runner.go:164] Run: docker exec newest-cni-020433 stat /var/lib/dpkg/alternatives/iptables
	I1129 09:17:39.159933  343912 oci.go:144] the created container "newest-cni-020433" has a running status.
	I1129 09:17:39.159970  343912 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa...
	I1129 09:17:39.228797  343912 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1129 09:17:39.262675  343912 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Status}}
	I1129 09:17:39.285576  343912 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1129 09:17:39.285600  343912 kic_runner.go:114] Args: [docker exec --privileged newest-cni-020433 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1129 09:17:39.349410  343912 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Status}}
	I1129 09:17:39.369689  343912 machine.go:94] provisionDockerMachine start ...
	I1129 09:17:39.369803  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:39.396522  343912 main.go:143] libmachine: Using SSH client type: native
	I1129 09:17:39.396932  343912 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1129 09:17:39.396965  343912 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:17:39.397982  343912 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1129 09:17:42.550448  343912 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-020433
	
	I1129 09:17:42.550474  343912 ubuntu.go:182] provisioning hostname "newest-cni-020433"
	I1129 09:17:42.550527  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:42.572133  343912 main.go:143] libmachine: Using SSH client type: native
	I1129 09:17:42.572440  343912 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1129 09:17:42.572461  343912 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-020433 && echo "newest-cni-020433" | sudo tee /etc/hostname
	I1129 09:17:42.733805  343912 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-020433
	
	I1129 09:17:42.733897  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:42.754783  343912 main.go:143] libmachine: Using SSH client type: native
	I1129 09:17:42.755144  343912 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1129 09:17:42.755173  343912 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-020433' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-020433/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-020433' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:17:42.901064  343912 main.go:143] libmachine: SSH cmd err, output: <nil>: 
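	Note: the script above follows the Debian 127.0.1.1 convention so the node's own hostname always resolves locally. A quick check from the host (a sketch using standard minikube ssh syntax):
	  $ minikube -p newest-cni-020433 ssh -- 'hostname; grep 127.0.1.1 /etc/hosts'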
	I1129 09:17:42.901098  343912 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-5652/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-5652/.minikube}
	I1129 09:17:42.901148  343912 ubuntu.go:190] setting up certificates
	I1129 09:17:42.901161  343912 provision.go:84] configureAuth start
	I1129 09:17:42.901231  343912 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-020433
	I1129 09:17:42.921161  343912 provision.go:143] copyHostCerts
	I1129 09:17:42.921240  343912 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem, removing ...
	I1129 09:17:42.921253  343912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem
	I1129 09:17:42.921344  343912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem (1078 bytes)
	I1129 09:17:42.921497  343912 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem, removing ...
	I1129 09:17:42.921509  343912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem
	I1129 09:17:42.921568  343912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem (1123 bytes)
	I1129 09:17:42.921658  343912 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem, removing ...
	I1129 09:17:42.921666  343912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem
	I1129 09:17:42.921693  343912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem (1675 bytes)
	I1129 09:17:42.921761  343912 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem org=jenkins.newest-cni-020433 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-020433]
	I1129 09:17:43.032466  343912 provision.go:177] copyRemoteCerts
	I1129 09:17:43.032525  343912 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:17:43.032558  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:43.052823  343912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:17:43.158233  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1129 09:17:43.179138  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1129 09:17:43.198311  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1129 09:17:43.217652  343912 provision.go:87] duration metric: took 316.475572ms to configureAuth
	I1129 09:17:43.217682  343912 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:17:43.217917  343912 config.go:182] Loaded profile config "newest-cni-020433": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:17:43.218034  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:43.237980  343912 main.go:143] libmachine: Using SSH client type: native
	I1129 09:17:43.238211  343912 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1129 09:17:43.238225  343912 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 09:17:43.535016  343912 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 09:17:43.535041  343912 machine.go:97] duration metric: took 4.165320057s to provisionDockerMachine
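	Note: minikube marks the service CIDR as an insecure registry by dropping a one-line sysconfig file and restarting CRI-O; by hand on the node this reduces to:
	  $ printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" | sudo tee /etc/sysconfig/crio.minikube
	  $ sudo systemctl restart crio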
	I1129 09:17:43.535052  343912 client.go:176] duration metric: took 10.568687757s to LocalClient.Create
	I1129 09:17:43.535073  343912 start.go:167] duration metric: took 10.568756916s to libmachine.API.Create "newest-cni-020433"
	I1129 09:17:43.535083  343912 start.go:293] postStartSetup for "newest-cni-020433" (driver="docker")
	I1129 09:17:43.535095  343912 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:17:43.535160  343912 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:17:43.535203  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:43.554574  343912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:17:43.661234  343912 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:17:43.665051  343912 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:17:43.665086  343912 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:17:43.665114  343912 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5652/.minikube/addons for local assets ...
	I1129 09:17:43.665186  343912 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5652/.minikube/files for local assets ...
	I1129 09:17:43.665301  343912 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem -> 92162.pem in /etc/ssl/certs
	I1129 09:17:43.665409  343912 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:17:43.674165  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem --> /etc/ssl/certs/92162.pem (1708 bytes)
	I1129 09:17:43.696383  343912 start.go:296] duration metric: took 161.286243ms for postStartSetup
	I1129 09:17:43.696751  343912 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-020433
	I1129 09:17:43.716301  343912 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/config.json ...
	I1129 09:17:43.716589  343912 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:17:43.716640  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:43.735518  343912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:17:43.835307  343912 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 09:17:43.840211  343912 start.go:128] duration metric: took 10.876067654s to createHost
	I1129 09:17:43.840237  343912 start.go:83] releasing machines lock for "newest-cni-020433", held for 10.876224942s
	I1129 09:17:43.840309  343912 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-020433
	I1129 09:17:43.860942  343912 ssh_runner.go:195] Run: cat /version.json
	I1129 09:17:43.860995  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:43.861019  343912 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:17:43.861110  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:17:43.881396  343912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:17:43.881825  343912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:17:44.035348  343912 ssh_runner.go:195] Run: systemctl --version
	I1129 09:17:44.042398  343912 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 09:17:44.079667  343912 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:17:44.084668  343912 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:17:44.084747  343912 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:17:44.112611  343912 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1129 09:17:44.112638  343912 start.go:496] detecting cgroup driver to use...
	I1129 09:17:44.112675  343912 detect.go:190] detected "systemd" cgroup driver on host os
	I1129 09:17:44.112721  343912 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 09:17:44.130191  343912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 09:17:44.143333  343912 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:17:44.143407  343912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:17:44.160522  343912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:17:44.179005  343912 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:17:44.264507  343912 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:17:44.361596  343912 docker.go:234] disabling docker service ...
	I1129 09:17:44.361665  343912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:17:44.385098  343912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:17:44.399261  343912 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:17:44.490353  343912 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:17:44.577339  343912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
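	Note: with CRI-O selected, the competing runtimes are stopped, disabled, and masked so their sockets cannot be picked up later; condensed (same units as in the log):
	  $ sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
	  $ sudo systemctl disable cri-docker.socket docker.socket
	  $ sudo systemctl mask cri-docker.service docker.service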
	I1129 09:17:44.590606  343912 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:17:44.606040  343912 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 09:17:44.606113  343912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:44.617850  343912 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1129 09:17:44.617930  343912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:44.627795  343912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:44.637388  343912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:44.647881  343912 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:17:44.657593  343912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:44.667667  343912 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:44.683312  343912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:17:44.693180  343912 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:17:44.701299  343912 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:17:44.709519  343912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:17:44.789707  343912 ssh_runner.go:195] Run: sudo systemctl restart crio
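	Note: the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with, roughly, the following runtime settings before the restart (an illustrative excerpt, not verbatim file contents; section headers come from the base config, values from this run):
	  pause_image = "registry.k8s.io/pause:3.10.1"
	  cgroup_manager = "systemd"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]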
	I1129 09:17:44.946719  343912 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 09:17:44.946786  343912 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 09:17:44.950988  343912 start.go:564] Will wait 60s for crictl version
	I1129 09:17:44.951061  343912 ssh_runner.go:195] Run: which crictl
	I1129 09:17:44.954897  343912 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:17:44.981273  343912 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1129 09:17:44.981355  343912 ssh_runner.go:195] Run: crio --version
	I1129 09:17:45.010241  343912 ssh_runner.go:195] Run: crio --version
	I1129 09:17:45.041932  343912 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1129 09:17:45.043598  343912 cli_runner.go:164] Run: docker network inspect newest-cni-020433 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:17:45.064493  343912 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1129 09:17:45.068916  343912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
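	Note: host.minikube.internal is added by rewriting /etc/hosts through a temp file, so the container can reach the Docker network gateway by name. To verify from the host (a sketch):
	  $ minikube -p newest-cni-020433 ssh -- 'getent hosts host.minikube.internal'   # expect 192.168.76.1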
	I1129 09:17:45.081636  343912 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1129 09:17:43.447332  336858 pod_ready.go:104] pod "coredns-66bc5c9577-z4m7c" is not "Ready", error: <nil>
	I1129 09:17:44.449613  336858 pod_ready.go:94] pod "coredns-66bc5c9577-z4m7c" is "Ready"
	I1129 09:17:44.449647  336858 pod_ready.go:86] duration metric: took 31.007906695s for pod "coredns-66bc5c9577-z4m7c" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:44.452244  336858 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:44.456751  336858 pod_ready.go:94] pod "etcd-default-k8s-diff-port-632243" is "Ready"
	I1129 09:17:44.456779  336858 pod_ready.go:86] duration metric: took 4.509231ms for pod "etcd-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:44.458972  336858 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:44.464014  336858 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-632243" is "Ready"
	I1129 09:17:44.464045  336858 pod_ready.go:86] duration metric: took 5.045626ms for pod "kube-apiserver-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:44.466444  336858 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:44.645988  336858 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-632243" is "Ready"
	I1129 09:17:44.646021  336858 pod_ready.go:86] duration metric: took 179.551463ms for pod "kube-controller-manager-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:44.845460  336858 pod_ready.go:83] waiting for pod "kube-proxy-p2nf7" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:45.245518  336858 pod_ready.go:94] pod "kube-proxy-p2nf7" is "Ready"
	I1129 09:17:45.245548  336858 pod_ready.go:86] duration metric: took 400.053767ms for pod "kube-proxy-p2nf7" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:45.445969  336858 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:45.847024  336858 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-632243" is "Ready"
	I1129 09:17:45.847054  336858 pod_ready.go:86] duration metric: took 401.056115ms for pod "kube-scheduler-default-k8s-diff-port-632243" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:45.847067  336858 pod_ready.go:40] duration metric: took 32.409409019s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:17:45.894722  336858 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1129 09:17:45.896514  336858 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-632243" cluster and "default" namespace by default
	W1129 09:17:42.828310  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	W1129 09:17:44.828378  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	I1129 09:17:45.082734  343912 kubeadm.go:884] updating cluster {Name:newest-cni-020433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-020433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:17:45.082902  343912 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:17:45.082966  343912 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:17:45.116711  343912 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:17:45.116737  343912 crio.go:433] Images already preloaded, skipping extraction
	I1129 09:17:45.116794  343912 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:17:45.143455  343912 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:17:45.143477  343912 cache_images.go:86] Images are preloaded, skipping loading
	I1129 09:17:45.143484  343912 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1129 09:17:45.143562  343912 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-020433 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-020433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
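	Note: the double ExecStart= above is the standard systemd drop-in idiom: the empty assignment clears the packaged command before the override sets the kubelet invocation. The drop-in lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp below) and is picked up with:
	  $ sudo systemctl daemon-reload && sudo systemctl start kubelet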
	I1129 09:17:45.143624  343912 ssh_runner.go:195] Run: crio config
	I1129 09:17:45.191199  343912 cni.go:84] Creating CNI manager for ""
	I1129 09:17:45.191226  343912 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:17:45.191244  343912 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1129 09:17:45.191264  343912 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-020433 NodeName:newest-cni-020433 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:17:45.191372  343912 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-020433"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
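	Note: the four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are written to /var/tmp/minikube/kubeadm.yaml. Such a file can be sanity-checked before use (a sketch; --dry-run is a standard kubeadm flag):
	  $ sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run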
	
	I1129 09:17:45.191438  343912 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 09:17:45.199969  343912 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 09:17:45.200043  343912 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:17:45.208777  343912 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1129 09:17:45.222978  343912 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:17:45.238915  343912 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1129 09:17:45.253505  343912 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1129 09:17:45.257546  343912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:17:45.269034  343912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:17:45.354518  343912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:17:45.382355  343912 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433 for IP: 192.168.76.2
	I1129 09:17:45.382379  343912 certs.go:195] generating shared ca certs ...
	I1129 09:17:45.382407  343912 certs.go:227] acquiring lock for ca certs: {Name:mk9945b3aa42b3d50b9bfd795af3a1c63f4e35bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:45.382577  343912 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key
	I1129 09:17:45.382636  343912 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key
	I1129 09:17:45.382650  343912 certs.go:257] generating profile certs ...
	I1129 09:17:45.382718  343912 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/client.key
	I1129 09:17:45.382739  343912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/client.crt with IP's: []
	I1129 09:17:45.531926  343912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/client.crt ...
	I1129 09:17:45.531957  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/client.crt: {Name:mkeb17feaf8ba6750a01bd0a1f0441d4154bc65f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:45.532140  343912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/client.key ...
	I1129 09:17:45.532151  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/client.key: {Name:mke1454a7dc3fbfdd29bdb836050690bcbb7394e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:45.532230  343912 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.key.22e84c70
	I1129 09:17:45.532247  343912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.crt.22e84c70 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1129 09:17:45.624876  343912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.crt.22e84c70 ...
	I1129 09:17:45.624908  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.crt.22e84c70: {Name:mk7ef25787741e084b6a866e43c94e1e8fef637a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:45.625077  343912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.key.22e84c70 ...
	I1129 09:17:45.625090  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.key.22e84c70: {Name:mk1ecd69640eeb4a11bb5f1e1ff7ab99459cb558 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:45.625222  343912 certs.go:382] copying /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.crt.22e84c70 -> /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.crt
	I1129 09:17:45.625303  343912 certs.go:386] copying /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.key.22e84c70 -> /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.key
	I1129 09:17:45.625381  343912 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.key
	I1129 09:17:45.625401  343912 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.crt with IP's: []
	I1129 09:17:45.648826  343912 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.crt ...
	I1129 09:17:45.648864  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.crt: {Name:mk66c6222d92d3d2bb033717f49fc6858d0a9367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:45.649040  343912 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.key ...
	I1129 09:17:45.649052  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.key: {Name:mk559719a3cba034552025e578cadb28054704f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:17:45.649223  343912 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216.pem (1338 bytes)
	W1129 09:17:45.649259  343912 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216_empty.pem, impossibly tiny 0 bytes
	I1129 09:17:45.649269  343912 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem (1679 bytes)
	I1129 09:17:45.649291  343912 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem (1078 bytes)
	I1129 09:17:45.649314  343912 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:17:45.649337  343912 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem (1675 bytes)
	I1129 09:17:45.649376  343912 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem (1708 bytes)
	I1129 09:17:45.649920  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:17:45.669435  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 09:17:45.688777  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:17:45.707612  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1129 09:17:45.726954  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1129 09:17:45.745570  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1129 09:17:45.763773  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:17:45.781717  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1129 09:17:45.799936  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem --> /usr/share/ca-certificates/92162.pem (1708 bytes)
	I1129 09:17:45.820108  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:17:45.839214  343912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216.pem --> /usr/share/ca-certificates/9216.pem (1338 bytes)
	I1129 09:17:45.859643  343912 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:17:45.874007  343912 ssh_runner.go:195] Run: openssl version
	I1129 09:17:45.880775  343912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9216.pem && ln -fs /usr/share/ca-certificates/9216.pem /etc/ssl/certs/9216.pem"
	I1129 09:17:45.890438  343912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9216.pem
	I1129 09:17:45.894494  343912 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:34 /usr/share/ca-certificates/9216.pem
	I1129 09:17:45.894554  343912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9216.pem
	I1129 09:17:45.934499  343912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9216.pem /etc/ssl/certs/51391683.0"
	I1129 09:17:45.944013  343912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/92162.pem && ln -fs /usr/share/ca-certificates/92162.pem /etc/ssl/certs/92162.pem"
	I1129 09:17:45.953676  343912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92162.pem
	I1129 09:17:45.957999  343912 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:34 /usr/share/ca-certificates/92162.pem
	I1129 09:17:45.958047  343912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92162.pem
	I1129 09:17:45.998219  343912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/92162.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 09:17:46.008105  343912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:17:46.018512  343912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:17:46.022778  343912 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:17:46.022855  343912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:17:46.060278  343912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
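	Note: each CA is trusted system-wide by symlinking it under /etc/ssl/certs by its OpenSSL subject hash, which is what the `openssl x509 -hash` calls above compute; e.g. for minikubeCA (hash value from this run):
	  $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # -> b5213941
	  $ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0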
	I1129 09:17:46.069685  343912 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:17:46.073627  343912 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1129 09:17:46.073677  343912 kubeadm.go:401] StartCluster: {Name:newest-cni-020433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-020433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:17:46.073751  343912 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 09:17:46.073796  343912 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:17:46.102729  343912 cri.go:89] found id: ""
	I1129 09:17:46.102806  343912 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:17:46.111499  343912 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1129 09:17:46.120045  343912 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1129 09:17:46.120110  343912 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1129 09:17:46.128326  343912 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1129 09:17:46.128366  343912 kubeadm.go:158] found existing configuration files:
	
	I1129 09:17:46.128413  343912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1129 09:17:46.136677  343912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1129 09:17:46.136741  343912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1129 09:17:46.144727  343912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1129 09:17:46.152908  343912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1129 09:17:46.152971  343912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1129 09:17:46.161300  343912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1129 09:17:46.170050  343912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1129 09:17:46.170117  343912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1129 09:17:46.179094  343912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1129 09:17:46.190258  343912 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1129 09:17:46.190325  343912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1129 09:17:46.200333  343912 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1129 09:17:46.284775  343912 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1129 09:17:46.350549  343912 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
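	Note: both preflight warnings are expected in this environment: the GCP kernel ships without the "configs" module, and minikube starts kubelet itself rather than enabling the unit. Outside of CI the second warning is silenced with:
	  $ sudo systemctl enable kubelet.service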
	W1129 09:17:47.327775  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	W1129 09:17:49.327943  336547 pod_ready.go:104] pod "coredns-66bc5c9577-ptx67" is not "Ready", error: <nil>
	I1129 09:17:49.827724  336547 pod_ready.go:94] pod "coredns-66bc5c9577-ptx67" is "Ready"
	I1129 09:17:49.827757  336547 pod_ready.go:86] duration metric: took 36.505830154s for pod "coredns-66bc5c9577-ptx67" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:49.830193  336547 pod_ready.go:83] waiting for pod "etcd-embed-certs-160987" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:49.834087  336547 pod_ready.go:94] pod "etcd-embed-certs-160987" is "Ready"
	I1129 09:17:49.834117  336547 pod_ready.go:86] duration metric: took 3.892584ms for pod "etcd-embed-certs-160987" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:49.836236  336547 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-160987" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:49.840124  336547 pod_ready.go:94] pod "kube-apiserver-embed-certs-160987" is "Ready"
	I1129 09:17:49.840148  336547 pod_ready.go:86] duration metric: took 3.889352ms for pod "kube-apiserver-embed-certs-160987" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:49.842042  336547 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-160987" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:50.026423  336547 pod_ready.go:94] pod "kube-controller-manager-embed-certs-160987" is "Ready"
	I1129 09:17:50.026453  336547 pod_ready.go:86] duration metric: took 184.390727ms for pod "kube-controller-manager-embed-certs-160987" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:50.225618  336547 pod_ready.go:83] waiting for pod "kube-proxy-57l9h" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:50.626123  336547 pod_ready.go:94] pod "kube-proxy-57l9h" is "Ready"
	I1129 09:17:50.626149  336547 pod_ready.go:86] duration metric: took 400.500945ms for pod "kube-proxy-57l9h" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:50.826449  336547 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-160987" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:51.226295  336547 pod_ready.go:94] pod "kube-scheduler-embed-certs-160987" is "Ready"
	I1129 09:17:51.226329  336547 pod_ready.go:86] duration metric: took 399.854281ms for pod "kube-scheduler-embed-certs-160987" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:17:51.226346  336547 pod_ready.go:40] duration metric: took 37.909395781s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:17:51.285055  336547 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1129 09:17:51.286778  336547 out.go:179] * Done! kubectl is now configured to use "embed-certs-160987" cluster and "default" namespace by default
	I1129 09:17:56.491067  343912 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1129 09:17:56.491128  343912 kubeadm.go:319] [preflight] Running pre-flight checks
	I1129 09:17:56.491204  343912 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1129 09:17:56.491252  343912 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1129 09:17:56.491321  343912 kubeadm.go:319] OS: Linux
	I1129 09:17:56.491400  343912 kubeadm.go:319] CGROUPS_CPU: enabled
	I1129 09:17:56.491441  343912 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1129 09:17:56.491502  343912 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1129 09:17:56.491558  343912 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1129 09:17:56.491602  343912 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1129 09:17:56.491642  343912 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1129 09:17:56.491683  343912 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1129 09:17:56.491733  343912 kubeadm.go:319] CGROUPS_IO: enabled
	I1129 09:17:56.491834  343912 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1129 09:17:56.491984  343912 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1129 09:17:56.492110  343912 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1129 09:17:56.492184  343912 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1129 09:17:56.493947  343912 out.go:252]   - Generating certificates and keys ...
	I1129 09:17:56.494037  343912 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1129 09:17:56.494134  343912 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1129 09:17:56.494235  343912 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1129 09:17:56.494315  343912 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1129 09:17:56.494392  343912 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1129 09:17:56.494466  343912 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1129 09:17:56.494546  343912 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1129 09:17:56.494718  343912 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-020433] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1129 09:17:56.494781  343912 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1129 09:17:56.494923  343912 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-020433] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1129 09:17:56.495006  343912 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1129 09:17:56.495078  343912 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1129 09:17:56.495157  343912 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1129 09:17:56.495234  343912 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1129 09:17:56.495280  343912 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1129 09:17:56.495370  343912 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1129 09:17:56.495457  343912 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1129 09:17:56.495570  343912 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1129 09:17:56.495624  343912 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1129 09:17:56.495696  343912 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1129 09:17:56.495760  343912 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1129 09:17:56.497322  343912 out.go:252]   - Booting up control plane ...
	I1129 09:17:56.497460  343912 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1129 09:17:56.497563  343912 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1129 09:17:56.497652  343912 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1129 09:17:56.497741  343912 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1129 09:17:56.497818  343912 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1129 09:17:56.497976  343912 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1129 09:17:56.498111  343912 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1129 09:17:56.498169  343912 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1129 09:17:56.498335  343912 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1129 09:17:56.498461  343912 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1129 09:17:56.498530  343912 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.935954ms
	I1129 09:17:56.498616  343912 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1129 09:17:56.498731  343912 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1129 09:17:56.498879  343912 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1129 09:17:56.498988  343912 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1129 09:17:56.499073  343912 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.504475511s
	I1129 09:17:56.499172  343912 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.695464789s
	I1129 09:17:56.499266  343912 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501879872s
	I1129 09:17:56.499440  343912 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1129 09:17:56.499624  343912 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1129 09:17:56.499691  343912 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1129 09:17:56.500020  343912 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-020433 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1129 09:17:56.500135  343912 kubeadm.go:319] [bootstrap-token] Using token: f82gs2.l4bciq1r030lvxp0
	I1129 09:17:56.501325  343912 out.go:252]   - Configuring RBAC rules ...
	I1129 09:17:56.501453  343912 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1129 09:17:56.501553  343912 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1129 09:17:56.501684  343912 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1129 09:17:56.501866  343912 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1129 09:17:56.502025  343912 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1129 09:17:56.502108  343912 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1129 09:17:56.502227  343912 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1129 09:17:56.502273  343912 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1129 09:17:56.502315  343912 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1129 09:17:56.502321  343912 kubeadm.go:319] 
	I1129 09:17:56.502376  343912 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1129 09:17:56.502381  343912 kubeadm.go:319] 
	I1129 09:17:56.502451  343912 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1129 09:17:56.502460  343912 kubeadm.go:319] 
	I1129 09:17:56.502481  343912 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1129 09:17:56.502532  343912 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1129 09:17:56.502576  343912 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1129 09:17:56.502586  343912 kubeadm.go:319] 
	I1129 09:17:56.502629  343912 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1129 09:17:56.502639  343912 kubeadm.go:319] 
	I1129 09:17:56.502689  343912 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1129 09:17:56.502697  343912 kubeadm.go:319] 
	I1129 09:17:56.502745  343912 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1129 09:17:56.502810  343912 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1129 09:17:56.502890  343912 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1129 09:17:56.502897  343912 kubeadm.go:319] 
	I1129 09:17:56.502971  343912 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1129 09:17:56.503057  343912 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1129 09:17:56.503064  343912 kubeadm.go:319] 
	I1129 09:17:56.503140  343912 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token f82gs2.l4bciq1r030lvxp0 \
	I1129 09:17:56.503224  343912 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c67f487f8861b4229187cb5510054720af943291cf02c59a2b23e487361f58e2 \
	I1129 09:17:56.503244  343912 kubeadm.go:319] 	--control-plane 
	I1129 09:17:56.503252  343912 kubeadm.go:319] 
	I1129 09:17:56.503335  343912 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1129 09:17:56.503344  343912 kubeadm.go:319] 
	I1129 09:17:56.503417  343912 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token f82gs2.l4bciq1r030lvxp0 \
	I1129 09:17:56.503523  343912 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:c67f487f8861b4229187cb5510054720af943291cf02c59a2b23e487361f58e2 
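
The --discovery-token-ca-cert-hash printed above is the SHA-256 digest of the cluster CA's public key, so it can be recomputed on the control plane to validate a join command. A sketch using the standard kubeadm recipe; the cert path follows the `[certs] Using certificateDir folder "/var/lib/minikube/certs"` line earlier in this run:

    # recompute the discovery hash from the cluster CA certificate
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
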
	I1129 09:17:56.503547  343912 cni.go:84] Creating CNI manager for ""
	I1129 09:17:56.503557  343912 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:17:56.504793  343912 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1129 09:17:56.505922  343912 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1129 09:17:56.510364  343912 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1129 09:17:56.510383  343912 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1129 09:17:56.523891  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1129 09:17:56.771723  343912 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1129 09:17:56.771759  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:17:56.771857  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-020433 minikube.k8s.io/updated_at=2025_11_29T09_17_56_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af minikube.k8s.io/name=newest-cni-020433 minikube.k8s.io/primary=true
	I1129 09:17:56.870386  343912 ops.go:34] apiserver oom_adj: -16
	I1129 09:17:56.870493  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:17:57.370894  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:17:57.870685  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:17:58.370644  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:17:58.870909  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:17:59.371245  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:17:59.871577  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:18:00.370624  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:18:00.871043  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:18:01.370798  343912 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:18:01.470229  343912 kubeadm.go:1114] duration metric: took 4.69851702s to wait for elevateKubeSystemPrivileges
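
The repeated `kubectl get sa default` calls above are a poll: kube-controller-manager creates the default ServiceAccount asynchronously after init, and pods cannot be admitted into a namespace until it exists. A minimal sketch of the same wait, assuming kubectl access to the new cluster:

    # block until the controller manager has created the default ServiceAccount
    until kubectl -n default get serviceaccount default >/dev/null 2>&1; do
      sleep 0.5
    done
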
	I1129 09:18:01.470353  343912 kubeadm.go:403] duration metric: took 15.396675728s to StartCluster
	I1129 09:18:01.470403  343912 settings.go:142] acquiring lock: {Name:mkebe0b2667fa03f51e92459cd7b7f37fd4c23bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:18:01.470526  343912 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 09:18:01.473161  343912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/kubeconfig: {Name:mk6280be5f60ace95b1c1acc672c087e2366542f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:18:01.473501  343912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1129 09:18:01.473529  343912 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 09:18:01.473595  343912 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-020433"
	I1129 09:18:01.473611  343912 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-020433"
	I1129 09:18:01.473639  343912 host.go:66] Checking if "newest-cni-020433" exists ...
	I1129 09:18:01.473786  343912 config.go:182] Loaded profile config "newest-cni-020433": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:18:01.473872  343912 addons.go:70] Setting default-storageclass=true in profile "newest-cni-020433"
	I1129 09:18:01.473890  343912 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-020433"
	I1129 09:18:01.473505  343912 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 09:18:01.474234  343912 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Status}}
	I1129 09:18:01.474263  343912 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Status}}
	I1129 09:18:01.477129  343912 out.go:179] * Verifying Kubernetes components...
	I1129 09:18:01.478510  343912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:18:01.508488  343912 addons.go:239] Setting addon default-storageclass=true in "newest-cni-020433"
	I1129 09:18:01.508544  343912 host.go:66] Checking if "newest-cni-020433" exists ...
	I1129 09:18:01.509017  343912 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Status}}
	I1129 09:18:01.512765  343912 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:18:01.513878  343912 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:18:01.513901  343912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 09:18:01.513969  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:18:01.548536  343912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:18:01.549743  343912 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 09:18:01.549766  343912 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 09:18:01.549824  343912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:18:01.577630  343912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:18:01.603306  343912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1129 09:18:01.652699  343912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:18:01.679084  343912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:18:01.710552  343912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 09:18:01.806299  343912 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1129 09:18:01.808103  343912 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:18:01.808185  343912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:18:02.050418  343912 api_server.go:72] duration metric: took 576.481112ms to wait for apiserver process to appear ...
	I1129 09:18:02.050443  343912 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:18:02.050462  343912 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 09:18:02.057555  343912 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1129 09:18:02.058665  343912 api_server.go:141] control plane version: v1.34.1
	I1129 09:18:02.058689  343912 api_server.go:131] duration metric: took 8.238938ms to wait for apiserver health ...
	I1129 09:18:02.058698  343912 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:18:02.059528  343912 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1129 09:18:02.062186  343912 addons.go:530] duration metric: took 588.650166ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1129 09:18:02.062399  343912 system_pods.go:59] 8 kube-system pods found
	I1129 09:18:02.062440  343912 system_pods.go:61] "coredns-66bc5c9577-h8nqv" [c8cbc934-0df3-44c5-a3d7-fff7ca54ef86] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1129 09:18:02.062454  343912 system_pods.go:61] "etcd-newest-cni-020433" [47991984-6243-463b-9cda-95d0e18b6092] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 09:18:02.062465  343912 system_pods.go:61] "kindnet-gxgwn" [7e13d750-7bcf-4e2a-9663-512ecc23781a] Running
	I1129 09:18:02.062474  343912 system_pods.go:61] "kube-apiserver-newest-cni-020433" [20641eff-ff31-4e31-8983-1075116bcdd4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 09:18:02.062488  343912 system_pods.go:61] "kube-controller-manager-newest-cni-020433" [f5bece62-e41a-4cf6-bacc-29d4dd0754cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 09:18:02.062494  343912 system_pods.go:61] "kube-proxy-nqwzp" [118d6bdc-5c33-4ab5-bee8-6f8a3447c461] Running
	I1129 09:18:02.062507  343912 system_pods.go:61] "kube-scheduler-newest-cni-020433" [3224b587-95a1-4963-88ae-af38a3bd1d84] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 09:18:02.062523  343912 system_pods.go:61] "storage-provisioner" [30a16c03-a054-435c-8eec-ce64486eb6c2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1129 09:18:02.062532  343912 system_pods.go:74] duration metric: took 3.827683ms to wait for pod list to return data ...
	I1129 09:18:02.062545  343912 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:18:02.065169  343912 default_sa.go:45] found service account: "default"
	I1129 09:18:02.065192  343912 default_sa.go:55] duration metric: took 2.641298ms for default service account to be created ...
	I1129 09:18:02.065203  343912 kubeadm.go:587] duration metric: took 591.270549ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1129 09:18:02.065217  343912 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:18:02.068226  343912 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1129 09:18:02.068253  343912 node_conditions.go:123] node cpu capacity is 8
	I1129 09:18:02.068266  343912 node_conditions.go:105] duration metric: took 3.045433ms to run NodePressure ...
	I1129 09:18:02.068278  343912 start.go:242] waiting for startup goroutines ...
	I1129 09:18:02.310908  343912 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-020433" context rescaled to 1 replicas
	I1129 09:18:02.310945  343912 start.go:247] waiting for cluster config update ...
	I1129 09:18:02.310956  343912 start.go:256] writing updated cluster config ...
	I1129 09:18:02.311264  343912 ssh_runner.go:195] Run: rm -f paused
	I1129 09:18:02.364750  343912 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1129 09:18:02.367739  343912 out.go:179] * Done! kubectl is now configured to use "newest-cni-020433" cluster and "default" namespace by default
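
At this point both profiles are up. Some quick post-start checks, as a sketch using the test binary from this run:

    out/minikube-linux-amd64 -p newest-cni-020433 status
    out/minikube-linux-amd64 -p newest-cni-020433 addons list
    kubectl --context newest-cni-020433 get pods -A
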
	
	
	==> CRI-O <==
	Nov 29 09:17:35 embed-certs-160987 crio[572]: time="2025-11-29T09:17:35.811632916Z" level=info msg="Started container" PID=1766 containerID=587f34901e932b9466dca1fe795751ef932a318b09a3565a13718b68bad71b19 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98zlj/dashboard-metrics-scraper id=5d9813dd-4e54-43e9-90ec-44112f0dff2e name=/runtime.v1.RuntimeService/StartContainer sandboxID=b83de4b5933becfff6eb105dfbd3f53faebe7bb718751bd1ff867871c0486f9e
	Nov 29 09:17:35 embed-certs-160987 crio[572]: time="2025-11-29T09:17:35.883158776Z" level=info msg="Removing container: 37aa5596c65f3b59d3ae1bddb9b6fc07f753d35a908edddbe3d59b1d949e62f9" id=87a05e0d-55c3-4366-9630-be0f445243fc name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 29 09:17:35 embed-certs-160987 crio[572]: time="2025-11-29T09:17:35.8955723Z" level=info msg="Removed container 37aa5596c65f3b59d3ae1bddb9b6fc07f753d35a908edddbe3d59b1d949e62f9: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98zlj/dashboard-metrics-scraper" id=87a05e0d-55c3-4366-9630-be0f445243fc name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 29 09:17:43 embed-certs-160987 crio[572]: time="2025-11-29T09:17:43.906462136Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b17b2e75-05b5-457f-9f79-5194f067cbaa name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:17:43 embed-certs-160987 crio[572]: time="2025-11-29T09:17:43.907371825Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=df1a1304-e2a1-4bb6-a784-f2bce87b215b name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:17:43 embed-certs-160987 crio[572]: time="2025-11-29T09:17:43.908499853Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=309d4bba-e05a-4e08-becd-570d8be6213e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:17:43 embed-certs-160987 crio[572]: time="2025-11-29T09:17:43.90862294Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:43 embed-certs-160987 crio[572]: time="2025-11-29T09:17:43.913442576Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:43 embed-certs-160987 crio[572]: time="2025-11-29T09:17:43.913644908Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e5acf427db1c00ef6752614fc1b17a91d751a00cb901896211603ee35c0c540d/merged/etc/passwd: no such file or directory"
	Nov 29 09:17:43 embed-certs-160987 crio[572]: time="2025-11-29T09:17:43.913699497Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e5acf427db1c00ef6752614fc1b17a91d751a00cb901896211603ee35c0c540d/merged/etc/group: no such file or directory"
	Nov 29 09:17:43 embed-certs-160987 crio[572]: time="2025-11-29T09:17:43.91401802Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:43 embed-certs-160987 crio[572]: time="2025-11-29T09:17:43.951156462Z" level=info msg="Created container 029d78d32a7ea14234ad87926a9be889f2d496efc65239c68f4aed436287d272: kube-system/storage-provisioner/storage-provisioner" id=309d4bba-e05a-4e08-becd-570d8be6213e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:17:43 embed-certs-160987 crio[572]: time="2025-11-29T09:17:43.951838275Z" level=info msg="Starting container: 029d78d32a7ea14234ad87926a9be889f2d496efc65239c68f4aed436287d272" id=9472dcce-1d34-4256-b62c-e6f8e358be56 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 09:17:43 embed-certs-160987 crio[572]: time="2025-11-29T09:17:43.954142801Z" level=info msg="Started container" PID=1780 containerID=029d78d32a7ea14234ad87926a9be889f2d496efc65239c68f4aed436287d272 description=kube-system/storage-provisioner/storage-provisioner id=9472dcce-1d34-4256-b62c-e6f8e358be56 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b6874fa8bf04e06691a7ad263dd2997d2c2202c554a48668b6b58753e0910805
	Nov 29 09:17:56 embed-certs-160987 crio[572]: time="2025-11-29T09:17:56.762980469Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7d72eec4-7186-49aa-92a0-43abc4f3d756 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:17:56 embed-certs-160987 crio[572]: time="2025-11-29T09:17:56.764099714Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=cc9ab8e5-b9a4-47b1-8e21-1b2387d809d9 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:17:56 embed-certs-160987 crio[572]: time="2025-11-29T09:17:56.76533776Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98zlj/dashboard-metrics-scraper" id=cbb8161b-1066-4b21-a8b1-ddccbf390546 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:17:56 embed-certs-160987 crio[572]: time="2025-11-29T09:17:56.765511239Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:56 embed-certs-160987 crio[572]: time="2025-11-29T09:17:56.771517461Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:56 embed-certs-160987 crio[572]: time="2025-11-29T09:17:56.772216967Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:17:56 embed-certs-160987 crio[572]: time="2025-11-29T09:17:56.814511135Z" level=info msg="Created container f2694b730cdae9713e45775e584e5c29751a7dc48494b00ad67c3002310bbbcb: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98zlj/dashboard-metrics-scraper" id=cbb8161b-1066-4b21-a8b1-ddccbf390546 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:17:56 embed-certs-160987 crio[572]: time="2025-11-29T09:17:56.815706631Z" level=info msg="Starting container: f2694b730cdae9713e45775e584e5c29751a7dc48494b00ad67c3002310bbbcb" id=f6e4d103-9c0d-49c0-bb04-8116f4727d98 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 09:17:56 embed-certs-160987 crio[572]: time="2025-11-29T09:17:56.818248274Z" level=info msg="Started container" PID=1816 containerID=f2694b730cdae9713e45775e584e5c29751a7dc48494b00ad67c3002310bbbcb description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98zlj/dashboard-metrics-scraper id=f6e4d103-9c0d-49c0-bb04-8116f4727d98 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b83de4b5933becfff6eb105dfbd3f53faebe7bb718751bd1ff867871c0486f9e
	Nov 29 09:17:56 embed-certs-160987 crio[572]: time="2025-11-29T09:17:56.942274588Z" level=info msg="Removing container: 587f34901e932b9466dca1fe795751ef932a318b09a3565a13718b68bad71b19" id=ad756c45-a2b6-4276-a00b-aa9313c1a260 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 29 09:17:56 embed-certs-160987 crio[572]: time="2025-11-29T09:17:56.952255178Z" level=info msg="Removed container 587f34901e932b9466dca1fe795751ef932a318b09a3565a13718b68bad71b19: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98zlj/dashboard-metrics-scraper" id=ad756c45-a2b6-4276-a00b-aa9313c1a260 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	f2694b730cdae       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago      Exited              dashboard-metrics-scraper   3                   b83de4b5933be       dashboard-metrics-scraper-6ffb444bf9-98zlj   kubernetes-dashboard
	029d78d32a7ea       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   b6874fa8bf04e       storage-provisioner                          kube-system
	9466ff8c42bf1       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago      Running             kubernetes-dashboard        0                   399d73db81e98       kubernetes-dashboard-855c9754f9-97f9m        kubernetes-dashboard
	b1168c95c222d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           54 seconds ago      Running             coredns                     0                   1fc2833d9c797       coredns-66bc5c9577-ptx67                     kube-system
	b086130dc9200       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   82f73c2cc547a       busybox                                      default
	00c1cfc1e5404       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           54 seconds ago      Running             kube-proxy                  0                   69a59b9356597       kube-proxy-57l9h                             kube-system
	86f9aa5168cf4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   b6874fa8bf04e       storage-provisioner                          kube-system
	668d944505877       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   4b714f98404d9       kindnet-cvmj6                                kube-system
	b910bdb65bded       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           58 seconds ago      Running             kube-scheduler              0                   2dc3587a12637       kube-scheduler-embed-certs-160987            kube-system
	d40c506138259       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           58 seconds ago      Running             kube-controller-manager     0                   85ae4f4143ba7       kube-controller-manager-embed-certs-160987   kube-system
	6ee1a1cef6abf       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           58 seconds ago      Running             kube-apiserver              0                   bb3d9b96a42ac       kube-apiserver-embed-certs-160987            kube-system
	062c767d0f027       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           58 seconds ago      Running             etcd                        0                   7cd248abc3317       etcd-embed-certs-160987                      kube-system
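
The listing above is CRI-O container state at collection time; the columns match crictl's `ps` output. To reproduce it on the node, a sketch assuming SSH access to the profile:

    # list all containers on the node, including exited attempts
    out/minikube-linux-amd64 -p embed-certs-160987 ssh -- sudo crictl ps -a
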
	
	
	==> coredns [b1168c95c222d7ead90133fcf186480b098d18439b28dae77675b1df0317dc77] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57185 - 28717 "HINFO IN 2815726449020211220.1603990658617242147. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.038845393s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
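
The `dial tcp 10.96.0.1:443: i/o timeout` errors mean CoreDNS could not reach the in-cluster apiserver service VIP for a short window after the restart, most likely before kube-proxy and kindnet had reprogrammed the service rules; the successful pod_ready wait in the start log above shows it recovered. A sketch to re-check VIP reachability from inside the pod network:

    # spin up a throwaway pod and test TCP reachability of the service VIP
    kubectl --context embed-certs-160987 run apicheck --rm -it --restart=Never \
      --image=busybox -- nc -zv -w 2 10.96.0.1 443
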
	
	
	==> describe nodes <==
	Name:               embed-certs-160987
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-160987
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=embed-certs-160987
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T09_16_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 09:16:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-160987
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 09:18:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 09:17:42 +0000   Sat, 29 Nov 2025 09:16:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 09:17:42 +0000   Sat, 29 Nov 2025 09:16:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 09:17:42 +0000   Sat, 29 Nov 2025 09:16:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 09:17:42 +0000   Sat, 29 Nov 2025 09:16:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-160987
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                01febc21-6293-4ce5-852c-5d2b1b91b577
	  Boot ID:                    5b66e152-4b71-4129-a911-cefbf4861c86
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-ptx67                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-embed-certs-160987                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         114s
	  kube-system                 kindnet-cvmj6                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-embed-certs-160987             250m (3%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-embed-certs-160987    200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-57l9h                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-embed-certs-160987             100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-98zlj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-97f9m         0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 108s               kube-proxy       
	  Normal  Starting                 54s                kube-proxy       
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  114s               kubelet          Node embed-certs-160987 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s               kubelet          Node embed-certs-160987 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s               kubelet          Node embed-certs-160987 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           110s               node-controller  Node embed-certs-160987 event: Registered Node embed-certs-160987 in Controller
	  Normal  NodeReady                98s                kubelet          Node embed-certs-160987 status is now: NodeReady
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)  kubelet          Node embed-certs-160987 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)  kubelet          Node embed-certs-160987 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)  kubelet          Node embed-certs-160987 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           54s                node-controller  Node embed-certs-160987 event: Registered Node embed-certs-160987 in Controller
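
Two `Starting kubelet.` and two `RegisteredNode` events appear because this node was started, stopped, and restarted during the test; the 60s-old series corresponds to the second boot. To pull just the event tail when reading a node dump like this, a sketch:

    # print everything from the Events: header onward
    kubectl --context embed-certs-160987 describe node embed-certs-160987 \
      | sed -n '/^Events:/,$p'
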
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 e1 87 e4 84 ae 08 06
	[  +0.002640] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8a c7 6c 3e 12 d1 08 06
	[ +14.778105] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 56 8a 29 c7 2a 6f 08 06
	[ +13.520989] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 86 32 aa eb 52 77 08 06
	[  +0.000371] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 56 8a 29 c7 2a 6f 08 06
	[ +14.762906] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 df ae 93 da f6 08 06
	[  +0.000384] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a c7 6c 3e 12 d1 08 06
	[Nov29 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 2e a3 ac f9 5c c0 08 06
	[ +13.297626] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 33 ff 54 aa a1 08 06
	[  +7.390906] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0a 68 f3 3d 3c e9 08 06
	[  +0.000436] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 2e a3 ac f9 5c c0 08 06
	[  +7.145922] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ea c3 2f 2a a3 01 08 06
	[  +0.000415] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e2 33 ff 54 aa a1 08 06
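
The `martian source` lines are the kernel flagging packets whose source address (here, pod CIDR 10.244.0.x) arrived on an interface where that source was unexpected; with the Docker driver this commonly happens while the CNI is being (re)installed and is generally benign in this context. Whether such packets are logged at all is a sysctl; a sketch to inspect it on the host:

    sysctl net.ipv4.conf.all.log_martians
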
	
	
	==> etcd [062c767d0f027b4b3689a35cad7c6003a28dac146ef6a6e9732382f36ec71ffa] <==
	{"level":"warn","ts":"2025-11-29T09:17:10.843095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:10.852521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:10.864697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:10.875321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:10.889932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:10.903570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:10.913203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:10.922869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:10.942674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:10.953278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:10.964371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:10.975672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:10.983549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:10.993018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.012626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.023202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.035243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.048294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.058477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.069458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.082444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.092376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.103152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.110991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:17:11.196574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34398","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:18:08 up  1:00,  0 user,  load average: 2.97, 3.66, 2.49
	Linux embed-certs-160987 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [668d94450587744ba0fea9e8fca8a95da8eb1024372b15cdc4023f63e16b8f81] <==
	I1129 09:17:13.375969       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 09:17:13.376243       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1129 09:17:13.376426       1 main.go:148] setting mtu 1500 for CNI 
	I1129 09:17:13.376442       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 09:17:13.376463       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T09:17:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 09:17:13.581939       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 09:17:13.581960       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 09:17:13.581968       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 09:17:13.582241       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1129 09:17:14.013869       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 09:17:14.013989       1 metrics.go:72] Registering metrics
	I1129 09:17:14.014359       1 controller.go:711] "Syncing nftables rules"
	I1129 09:17:23.581566       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 09:17:23.581669       1 main.go:301] handling current node
	I1129 09:17:33.585976       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 09:17:33.586037       1 main.go:301] handling current node
	I1129 09:17:43.582553       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 09:17:43.582588       1 main.go:301] handling current node
	I1129 09:17:53.582199       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 09:17:53.582242       1 main.go:301] handling current node
	I1129 09:18:03.586933       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1129 09:18:03.586979       1 main.go:301] handling current node
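
kindnet's steady-state loop above re-handles the node list every ten seconds, so a single-node cluster logs only `handling current node`. To spot-check the DaemonSet pods after a restart, a sketch (the `app=kindnet` label is an assumption based on minikube's kindnet manifest):

    kubectl --context embed-certs-160987 -n kube-system get pods -l app=kindnet -o wide
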
	
	
	==> kube-apiserver [6ee1a1cef6abf99fe2be4154d33fa7e55335140b3c9fc7c979eabca17e682341] <==
	I1129 09:17:11.869197       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:17:11.869240       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1129 09:17:11.872869       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1129 09:17:11.873199       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1129 09:17:11.873282       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1129 09:17:11.873262       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1129 09:17:11.896724       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1129 09:17:11.909168       1 cache.go:39] Caches are synced for autoregister controller
	I1129 09:17:11.909380       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1129 09:17:11.910446       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1129 09:17:11.910564       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1129 09:17:11.925184       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1129 09:17:11.933536       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 09:17:12.472981       1 controller.go:667] quota admission added evaluator for: namespaces
	I1129 09:17:12.517399       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 09:17:12.560981       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 09:17:12.577197       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 09:17:12.591506       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 09:17:12.664504       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.74.233"}
	I1129 09:17:12.680519       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.177.131"}
	I1129 09:17:12.772520       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 09:17:15.253649       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 09:17:15.253693       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 09:17:15.355351       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 09:17:15.402722       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [d40c5061382593cad885d4b3c86be7a3641ec567ffe3cb652cfd84dd0c2396bf] <==
	I1129 09:17:14.839644       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1129 09:17:14.843978       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1129 09:17:14.846325       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1129 09:17:14.849644       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1129 09:17:14.849673       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1129 09:17:14.849679       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1129 09:17:14.849717       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1129 09:17:14.849736       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1129 09:17:14.849750       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:17:14.849760       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1129 09:17:14.849765       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1129 09:17:14.849776       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1129 09:17:14.849800       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1129 09:17:14.849917       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1129 09:17:14.849935       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-160987"
	I1129 09:17:14.849980       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1129 09:17:14.850109       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1129 09:17:14.851374       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1129 09:17:14.851443       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1129 09:17:14.854030       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1129 09:17:14.854035       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1129 09:17:14.856026       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:17:14.857109       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1129 09:17:14.859401       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1129 09:17:14.876889       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [00c1cfc1e5404b627c278cb3aa524243f84e0940022fa9856a40f2180118e3da] <==
	I1129 09:17:13.213345       1 server_linux.go:53] "Using iptables proxy"
	I1129 09:17:13.290090       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 09:17:13.390695       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 09:17:13.390744       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1129 09:17:13.390895       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 09:17:13.413270       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 09:17:13.413361       1 server_linux.go:132] "Using iptables Proxier"
	I1129 09:17:13.419545       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 09:17:13.420082       1 server.go:527] "Version info" version="v1.34.1"
	I1129 09:17:13.420127       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:17:13.421456       1 config.go:106] "Starting endpoint slice config controller"
	I1129 09:17:13.421495       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 09:17:13.421505       1 config.go:200] "Starting service config controller"
	I1129 09:17:13.421524       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 09:17:13.421534       1 config.go:309] "Starting node config controller"
	I1129 09:17:13.421542       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 09:17:13.421550       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 09:17:13.421514       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 09:17:13.421582       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 09:17:13.521752       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 09:17:13.521771       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1129 09:17:13.522023       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [b910bdb65bdedc5ad424106b6aea90fdb221e9c9e03ce5e62c16682d9c219dbf] <==
	I1129 09:17:10.491803       1 serving.go:386] Generated self-signed cert in-memory
	W1129 09:17:11.785019       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1129 09:17:11.785093       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1129 09:17:11.785221       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1129 09:17:11.785235       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1129 09:17:11.846940       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1129 09:17:11.847700       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:17:11.854624       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1129 09:17:11.854796       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:17:11.854812       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:17:11.854833       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1129 09:17:11.955503       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 09:17:19 embed-certs-160987 kubelet[735]: I1129 09:17:19.687264     735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 29 09:17:19 embed-certs-160987 kubelet[735]: I1129 09:17:19.833676     735 scope.go:117] "RemoveContainer" containerID="37aa5596c65f3b59d3ae1bddb9b6fc07f753d35a908edddbe3d59b1d949e62f9"
	Nov 29 09:17:19 embed-certs-160987 kubelet[735]: I1129 09:17:19.836139     735 scope.go:117] "RemoveContainer" containerID="0ccec968e9c37101f3ff30b51ae086a796be8e0337e2c762bcfd3794224e211f"
	Nov 29 09:17:19 embed-certs-160987 kubelet[735]: E1129 09:17:19.837212     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-98zlj_kubernetes-dashboard(ca2513a0-66c7-4c57-96c0-6eaee18c65a9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98zlj" podUID="ca2513a0-66c7-4c57-96c0-6eaee18c65a9"
	Nov 29 09:17:20 embed-certs-160987 kubelet[735]: I1129 09:17:20.838683     735 scope.go:117] "RemoveContainer" containerID="37aa5596c65f3b59d3ae1bddb9b6fc07f753d35a908edddbe3d59b1d949e62f9"
	Nov 29 09:17:20 embed-certs-160987 kubelet[735]: E1129 09:17:20.839396     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-98zlj_kubernetes-dashboard(ca2513a0-66c7-4c57-96c0-6eaee18c65a9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98zlj" podUID="ca2513a0-66c7-4c57-96c0-6eaee18c65a9"
	Nov 29 09:17:22 embed-certs-160987 kubelet[735]: I1129 09:17:22.855625     735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-97f9m" podStartSLOduration=1.5622196019999999 podStartE2EDuration="7.855601368s" podCreationTimestamp="2025-11-29 09:17:15 +0000 UTC" firstStartedPulling="2025-11-29 09:17:15.813149261 +0000 UTC m=+7.162417403" lastFinishedPulling="2025-11-29 09:17:22.106531027 +0000 UTC m=+13.455799169" observedRunningTime="2025-11-29 09:17:22.855462872 +0000 UTC m=+14.204731023" watchObservedRunningTime="2025-11-29 09:17:22.855601368 +0000 UTC m=+14.204869519"
	Nov 29 09:17:25 embed-certs-160987 kubelet[735]: I1129 09:17:25.400677     735 scope.go:117] "RemoveContainer" containerID="37aa5596c65f3b59d3ae1bddb9b6fc07f753d35a908edddbe3d59b1d949e62f9"
	Nov 29 09:17:25 embed-certs-160987 kubelet[735]: E1129 09:17:25.400868     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-98zlj_kubernetes-dashboard(ca2513a0-66c7-4c57-96c0-6eaee18c65a9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98zlj" podUID="ca2513a0-66c7-4c57-96c0-6eaee18c65a9"
	Nov 29 09:17:35 embed-certs-160987 kubelet[735]: I1129 09:17:35.753722     735 scope.go:117] "RemoveContainer" containerID="37aa5596c65f3b59d3ae1bddb9b6fc07f753d35a908edddbe3d59b1d949e62f9"
	Nov 29 09:17:35 embed-certs-160987 kubelet[735]: I1129 09:17:35.881363     735 scope.go:117] "RemoveContainer" containerID="37aa5596c65f3b59d3ae1bddb9b6fc07f753d35a908edddbe3d59b1d949e62f9"
	Nov 29 09:17:35 embed-certs-160987 kubelet[735]: I1129 09:17:35.881676     735 scope.go:117] "RemoveContainer" containerID="587f34901e932b9466dca1fe795751ef932a318b09a3565a13718b68bad71b19"
	Nov 29 09:17:35 embed-certs-160987 kubelet[735]: E1129 09:17:35.881820     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-98zlj_kubernetes-dashboard(ca2513a0-66c7-4c57-96c0-6eaee18c65a9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98zlj" podUID="ca2513a0-66c7-4c57-96c0-6eaee18c65a9"
	Nov 29 09:17:43 embed-certs-160987 kubelet[735]: I1129 09:17:43.906111     735 scope.go:117] "RemoveContainer" containerID="86f9aa5168cf43f40605f3e7fc7ef07afa72f313e7f427fb772bbccde2c8feb9"
	Nov 29 09:17:45 embed-certs-160987 kubelet[735]: I1129 09:17:45.401762     735 scope.go:117] "RemoveContainer" containerID="587f34901e932b9466dca1fe795751ef932a318b09a3565a13718b68bad71b19"
	Nov 29 09:17:45 embed-certs-160987 kubelet[735]: E1129 09:17:45.402015     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-98zlj_kubernetes-dashboard(ca2513a0-66c7-4c57-96c0-6eaee18c65a9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98zlj" podUID="ca2513a0-66c7-4c57-96c0-6eaee18c65a9"
	Nov 29 09:17:56 embed-certs-160987 kubelet[735]: I1129 09:17:56.762452     735 scope.go:117] "RemoveContainer" containerID="587f34901e932b9466dca1fe795751ef932a318b09a3565a13718b68bad71b19"
	Nov 29 09:17:56 embed-certs-160987 kubelet[735]: I1129 09:17:56.940900     735 scope.go:117] "RemoveContainer" containerID="587f34901e932b9466dca1fe795751ef932a318b09a3565a13718b68bad71b19"
	Nov 29 09:17:56 embed-certs-160987 kubelet[735]: I1129 09:17:56.941115     735 scope.go:117] "RemoveContainer" containerID="f2694b730cdae9713e45775e584e5c29751a7dc48494b00ad67c3002310bbbcb"
	Nov 29 09:17:56 embed-certs-160987 kubelet[735]: E1129 09:17:56.941322     735 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-98zlj_kubernetes-dashboard(ca2513a0-66c7-4c57-96c0-6eaee18c65a9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98zlj" podUID="ca2513a0-66c7-4c57-96c0-6eaee18c65a9"
	Nov 29 09:18:03 embed-certs-160987 kubelet[735]: I1129 09:18:03.573552     735 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 29 09:18:03 embed-certs-160987 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 29 09:18:03 embed-certs-160987 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 29 09:18:03 embed-certs-160987 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 29 09:18:03 embed-certs-160987 systemd[1]: kubelet.service: Consumed 1.879s CPU time.
	
	
	==> kubernetes-dashboard [9466ff8c42bf12177809c91484f8627a7ea39bef17beb3f8f5f5fbc14b260a39] <==
	2025/11/29 09:17:22 Using namespace: kubernetes-dashboard
	2025/11/29 09:17:22 Using in-cluster config to connect to apiserver
	2025/11/29 09:17:22 Using secret token for csrf signing
	2025/11/29 09:17:22 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/29 09:17:22 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/29 09:17:22 Successful initial request to the apiserver, version: v1.34.1
	2025/11/29 09:17:22 Generating JWE encryption key
	2025/11/29 09:17:22 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/29 09:17:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/29 09:17:22 Initializing JWE encryption key from synchronized object
	2025/11/29 09:17:22 Creating in-cluster Sidecar client
	2025/11/29 09:17:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/29 09:17:22 Serving insecurely on HTTP port: 9090
	2025/11/29 09:17:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/29 09:17:22 Starting overwatch
	
	
	==> storage-provisioner [029d78d32a7ea14234ad87926a9be889f2d496efc65239c68f4aed436287d272] <==
	I1129 09:17:43.966170       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1129 09:17:43.972771       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1129 09:17:43.972823       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1129 09:17:43.975043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:17:47.429749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:17:51.690505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:17:55.289077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:17:58.343062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:01.366169       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:01.372527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:18:01.373536       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 09:18:01.373748       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-160987_dbb6caf1-1aa4-4fa2-a1e7-de25abf2cbec!
	I1129 09:18:01.373745       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9795b373-c1b1-46fc-9f5b-0328f9c89ace", APIVersion:"v1", ResourceVersion:"634", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-160987_dbb6caf1-1aa4-4fa2-a1e7-de25abf2cbec became leader
	W1129 09:18:01.383666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:01.390368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:18:01.474713       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-160987_dbb6caf1-1aa4-4fa2-a1e7-de25abf2cbec!
	W1129 09:18:03.393931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:03.400042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:05.404859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:05.411636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:07.415040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:18:07.420049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [86f9aa5168cf43f40605f3e7fc7ef07afa72f313e7f427fb772bbccde2c8feb9] <==
	I1129 09:17:13.144720       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1129 09:17:43.147773       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
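For reference: the F-line above (storage-provisioner main.go:39) is a plain HTTPS GET against the in-cluster apiserver service IP that timed out; the second instance (029d78d32a7e) then started and won the lease. Below is a minimal Go sketch of that version probe, assuming the 10.96.0.1:443 address and 32s timeout shown in the log; TLS verification is skipped only to keep the sketch self-contained, whereas the real provisioner authenticates with in-cluster credentials.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 32 * time.Second, // matches "?timeout=32s" in the failing request above
		Transport: &http.Transport{
			// Sketch only: skip cert verification so this runs outside the cluster.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://10.96.0.1:443/version")
	if err != nil {
		// This is the branch the first provisioner instance hit (i/o timeout).
		fmt.Println("error getting server version:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}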
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-160987 -n embed-certs-160987
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-160987 -n embed-certs-160987: exit status 2 (337.446573ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-160987 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (5.83s)
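The final check in the post-mortem above (helpers_test.go:269) lists any pods not in the Running phase via a kubectl field selector. The same query, sketched with client-go under the assumption of a local kubeconfig whose current context points at the cluster under test (the test itself passes --context embed-certs-160987):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes ~/.kube/config with the desired context already selected.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Equivalent of: kubectl get po -A --field-selector=status.phase!=Running
	pods, err := cs.CoreV1().Pods("").List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Namespace, p.Name, p.Status.Phase)
	}
}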

TestStartStop/group/newest-cni/serial/Pause (6.05s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-020433 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-020433 --alsologtostderr -v=1: exit status 80 (2.485698156s)

-- stdout --
	* Pausing node newest-cni-020433 ... 
	
	

-- /stdout --
** stderr ** 
	I1129 09:18:33.713138  356651 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:18:33.713444  356651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:18:33.713454  356651 out.go:374] Setting ErrFile to fd 2...
	I1129 09:18:33.713458  356651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:18:33.713661  356651 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 09:18:33.713909  356651 out.go:368] Setting JSON to false
	I1129 09:18:33.713926  356651 mustload.go:66] Loading cluster: newest-cni-020433
	I1129 09:18:33.714295  356651 config.go:182] Loaded profile config "newest-cni-020433": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:18:33.714677  356651 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Status}}
	I1129 09:18:33.735480  356651 host.go:66] Checking if "newest-cni-020433" exists ...
	I1129 09:18:33.735776  356651 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:18:33.796905  356651 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2025-11-29 09:18:33.786175708 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:18:33.797550  356651 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-020433 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1129 09:18:33.800244  356651 out.go:179] * Pausing node newest-cni-020433 ... 
	I1129 09:18:33.801472  356651 host.go:66] Checking if "newest-cni-020433" exists ...
	I1129 09:18:33.801772  356651 ssh_runner.go:195] Run: systemctl --version
	I1129 09:18:33.801823  356651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:18:33.821013  356651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:18:33.924249  356651 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:18:33.937450  356651 pause.go:52] kubelet running: true
	I1129 09:18:33.937525  356651 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1129 09:18:34.075734  356651 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1129 09:18:34.075828  356651 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1129 09:18:34.146780  356651 cri.go:89] found id: "c812c112070adb5b5f9bdaf23aee437cad110ec9b1047b6413e4de4a8bef25e0"
	I1129 09:18:34.146809  356651 cri.go:89] found id: "1466ac1e233eec9fe67dd8a0194308e6dedef82d9e2679668d0e7f5cfdf904be"
	I1129 09:18:34.146818  356651 cri.go:89] found id: "5d737be9886c464ab2ee4b01f6470c5147ee1d043d43b8028fca68dff34978c1"
	I1129 09:18:34.146823  356651 cri.go:89] found id: "cc99378f3afa1b471c013ab021cd9149dfe1e247b71c547969ec37147062fe7a"
	I1129 09:18:34.146827  356651 cri.go:89] found id: "78d9ae9dc233e43c0a5758db285e1a9283d698f9ec56f9f3e6086457bf96931f"
	I1129 09:18:34.146832  356651 cri.go:89] found id: "38ea4a65fa801144421a1e26140edce1f295aaa42e72beef2ceab2847588e64c"
	I1129 09:18:34.146836  356651 cri.go:89] found id: ""
	I1129 09:18:34.146907  356651 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 09:18:34.158774  356651 retry.go:31] will retry after 242.434304ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:18:34Z" level=error msg="open /run/runc: no such file or directory"
	I1129 09:18:34.402375  356651 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:18:34.415797  356651 pause.go:52] kubelet running: false
	I1129 09:18:34.415873  356651 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1129 09:18:34.528935  356651 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1129 09:18:34.529033  356651 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1129 09:18:34.600332  356651 cri.go:89] found id: "c812c112070adb5b5f9bdaf23aee437cad110ec9b1047b6413e4de4a8bef25e0"
	I1129 09:18:34.600364  356651 cri.go:89] found id: "1466ac1e233eec9fe67dd8a0194308e6dedef82d9e2679668d0e7f5cfdf904be"
	I1129 09:18:34.600371  356651 cri.go:89] found id: "5d737be9886c464ab2ee4b01f6470c5147ee1d043d43b8028fca68dff34978c1"
	I1129 09:18:34.600377  356651 cri.go:89] found id: "cc99378f3afa1b471c013ab021cd9149dfe1e247b71c547969ec37147062fe7a"
	I1129 09:18:34.600382  356651 cri.go:89] found id: "78d9ae9dc233e43c0a5758db285e1a9283d698f9ec56f9f3e6086457bf96931f"
	I1129 09:18:34.600388  356651 cri.go:89] found id: "38ea4a65fa801144421a1e26140edce1f295aaa42e72beef2ceab2847588e64c"
	I1129 09:18:34.600393  356651 cri.go:89] found id: ""
	I1129 09:18:34.600463  356651 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 09:18:34.612588  356651 retry.go:31] will retry after 347.439545ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:18:34Z" level=error msg="open /run/runc: no such file or directory"
	I1129 09:18:34.961263  356651 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:18:34.974750  356651 pause.go:52] kubelet running: false
	I1129 09:18:34.974812  356651 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1129 09:18:35.091056  356651 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1129 09:18:35.091135  356651 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1129 09:18:35.161422  356651 cri.go:89] found id: "c812c112070adb5b5f9bdaf23aee437cad110ec9b1047b6413e4de4a8bef25e0"
	I1129 09:18:35.161450  356651 cri.go:89] found id: "1466ac1e233eec9fe67dd8a0194308e6dedef82d9e2679668d0e7f5cfdf904be"
	I1129 09:18:35.161456  356651 cri.go:89] found id: "5d737be9886c464ab2ee4b01f6470c5147ee1d043d43b8028fca68dff34978c1"
	I1129 09:18:35.161461  356651 cri.go:89] found id: "cc99378f3afa1b471c013ab021cd9149dfe1e247b71c547969ec37147062fe7a"
	I1129 09:18:35.161465  356651 cri.go:89] found id: "78d9ae9dc233e43c0a5758db285e1a9283d698f9ec56f9f3e6086457bf96931f"
	I1129 09:18:35.161470  356651 cri.go:89] found id: "38ea4a65fa801144421a1e26140edce1f295aaa42e72beef2ceab2847588e64c"
	I1129 09:18:35.161475  356651 cri.go:89] found id: ""
	I1129 09:18:35.161530  356651 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 09:18:35.173531  356651 retry.go:31] will retry after 749.49908ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:18:35Z" level=error msg="open /run/runc: no such file or directory"
	I1129 09:18:35.923480  356651 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:18:35.936951  356651 pause.go:52] kubelet running: false
	I1129 09:18:35.937042  356651 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1129 09:18:36.048228  356651 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1129 09:18:36.048323  356651 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1129 09:18:36.116312  356651 cri.go:89] found id: "c812c112070adb5b5f9bdaf23aee437cad110ec9b1047b6413e4de4a8bef25e0"
	I1129 09:18:36.116332  356651 cri.go:89] found id: "1466ac1e233eec9fe67dd8a0194308e6dedef82d9e2679668d0e7f5cfdf904be"
	I1129 09:18:36.116336  356651 cri.go:89] found id: "5d737be9886c464ab2ee4b01f6470c5147ee1d043d43b8028fca68dff34978c1"
	I1129 09:18:36.116339  356651 cri.go:89] found id: "cc99378f3afa1b471c013ab021cd9149dfe1e247b71c547969ec37147062fe7a"
	I1129 09:18:36.116342  356651 cri.go:89] found id: "78d9ae9dc233e43c0a5758db285e1a9283d698f9ec56f9f3e6086457bf96931f"
	I1129 09:18:36.116346  356651 cri.go:89] found id: "38ea4a65fa801144421a1e26140edce1f295aaa42e72beef2ceab2847588e64c"
	I1129 09:18:36.116349  356651 cri.go:89] found id: ""
	I1129 09:18:36.116391  356651 ssh_runner.go:195] Run: sudo runc list -f json
	I1129 09:18:36.130601  356651 out.go:203] 
	W1129 09:18:36.131756  356651 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:18:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:18:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1129 09:18:36.131774  356651 out.go:285] * 
	* 
	W1129 09:18:36.135815  356651 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1129 09:18:36.136975  356651 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-020433 --alsologtostderr -v=1 failed: exit status 80
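The stderr above shows the exact failure loop: each pause attempt shells out to sudo runc list -f json, gets "open /run/runc: no such file or directory", and retry.go backs off three times before minikube gives up with GUEST_PAUSE (exit status 80). A minimal sketch of that list-and-retry step follows; this is not minikube's pause.go, just an illustration using the same command and roughly the delays printed above.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// listRunning runs the same command the log shows failing; runc enumerates
// containers from its state directory (/run/runc by default).
func listRunning() ([]byte, error) {
	return exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
}

func main() {
	backoffs := []time.Duration{ // roughly the retry.go delays above
		242 * time.Millisecond,
		347 * time.Millisecond,
		749 * time.Millisecond,
	}
	for _, d := range backoffs {
		out, err := listRunning()
		if err == nil {
			fmt.Println(string(out))
			return
		}
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	// minikube surfaces this terminal case as GUEST_PAUSE / exit status 80.
	fmt.Println("giving up: runc state directory still unavailable")
}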
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-020433
helpers_test.go:243: (dbg) docker inspect newest-cni-020433:

-- stdout --
	[
	    {
	        "Id": "a9ac1a439ce6b1fb29944ad6280f3be5fb721f32ccaf7c000e9793dbab9d8063",
	        "Created": "2025-11-29T09:17:38.486313312Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 354856,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T09:18:23.142599404Z",
	            "FinishedAt": "2025-11-29T09:18:22.294990164Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/a9ac1a439ce6b1fb29944ad6280f3be5fb721f32ccaf7c000e9793dbab9d8063/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a9ac1a439ce6b1fb29944ad6280f3be5fb721f32ccaf7c000e9793dbab9d8063/hostname",
	        "HostsPath": "/var/lib/docker/containers/a9ac1a439ce6b1fb29944ad6280f3be5fb721f32ccaf7c000e9793dbab9d8063/hosts",
	        "LogPath": "/var/lib/docker/containers/a9ac1a439ce6b1fb29944ad6280f3be5fb721f32ccaf7c000e9793dbab9d8063/a9ac1a439ce6b1fb29944ad6280f3be5fb721f32ccaf7c000e9793dbab9d8063-json.log",
	        "Name": "/newest-cni-020433",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-020433:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-020433",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a9ac1a439ce6b1fb29944ad6280f3be5fb721f32ccaf7c000e9793dbab9d8063",
	                "LowerDir": "/var/lib/docker/overlay2/a8a6ba38910989b11fc84ca9f5e0a6bd875cd888d1b48820e429d717fc735951-init/diff:/var/lib/docker/overlay2/5b012372cfb54f6c71f4d7f0bca0124866eeda530eaa04bd84a67c7b4c8b35a8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a8a6ba38910989b11fc84ca9f5e0a6bd875cd888d1b48820e429d717fc735951/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a8a6ba38910989b11fc84ca9f5e0a6bd875cd888d1b48820e429d717fc735951/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a8a6ba38910989b11fc84ca9f5e0a6bd875cd888d1b48820e429d717fc735951/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-020433",
	                "Source": "/var/lib/docker/volumes/newest-cni-020433/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-020433",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-020433",
	                "name.minikube.sigs.k8s.io": "newest-cni-020433",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "7e91697c38934e47c30a91f625f1bde5cdc7cd70b30bdde230a2232ce70df9f6",
	            "SandboxKey": "/var/run/docker/netns/7e91697c3893",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-020433": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "aef7b8e187de0f8bf6cc69caec08dbd4417b8aa19d6d09df2b42cb2151e49057",
	                    "EndpointID": "60229fccf4ffdbd9e664e54e8e9ce771a02b6d6f29f4263b377ace8b2b6c8f30",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "62:ba:54:91:03:26",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-020433",
	                        "a9ac1a439ce6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
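The cli_runner step in the stderr earlier recovered the SSH endpoint from exactly this document, using the template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}. The same lookup, sketched by unmarshalling the docker inspect JSON instead of using a CLI template; the struct declares only the fields the lookup touches.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// container models just the slice of `docker inspect` output needed here.
type container struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "newest-cni-020433").Output()
	if err != nil {
		panic(err)
	}
	var cs []container // docker inspect always returns a JSON array
	if err := json.Unmarshal(out, &cs); err != nil {
		panic(err)
	}
	bindings := cs[0].NetworkSettings.Ports["22/tcp"]
	if len(bindings) == 0 {
		panic("no host binding for 22/tcp")
	}
	// For the container above this prints 127.0.0.1:33134.
	fmt.Println(bindings[0].HostIp + ":" + bindings[0].HostPort)
}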
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-020433 -n newest-cni-020433
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-020433 -n newest-cni-020433: exit status 2 (337.159208ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-020433 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-632243 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ start   │ -p default-k8s-diff-port-632243 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ image   │ old-k8s-version-680646 image list --format=json                                                                                                                                                                                               │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ pause   │ -p old-k8s-version-680646 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │                     │
	│ delete  │ -p old-k8s-version-680646                                                                                                                                                                                                                     │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ image   │ no-preload-897274 image list --format=json                                                                                                                                                                                                    │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ pause   │ -p no-preload-897274 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │                     │
	│ delete  │ -p old-k8s-version-680646                                                                                                                                                                                                                     │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ start   │ -p newest-cni-020433 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-020433            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:18 UTC │
	│ delete  │ -p no-preload-897274                                                                                                                                                                                                                          │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ delete  │ -p no-preload-897274                                                                                                                                                                                                                          │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ image   │ default-k8s-diff-port-632243 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ pause   │ -p default-k8s-diff-port-632243 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-020433 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-020433            │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │                     │
	│ image   │ embed-certs-160987 image list --format=json                                                                                                                                                                                                   │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │ 29 Nov 25 09:18 UTC │
	│ pause   │ -p embed-certs-160987 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-632243                                                                                                                                                                                                               │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │ 29 Nov 25 09:18 UTC │
	│ stop    │ -p newest-cni-020433 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-020433            │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │ 29 Nov 25 09:18 UTC │
	│ delete  │ -p default-k8s-diff-port-632243                                                                                                                                                                                                               │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │ 29 Nov 25 09:18 UTC │
	│ delete  │ -p embed-certs-160987                                                                                                                                                                                                                         │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │ 29 Nov 25 09:18 UTC │
	│ delete  │ -p embed-certs-160987                                                                                                                                                                                                                         │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │ 29 Nov 25 09:18 UTC │
	│ addons  │ enable dashboard -p newest-cni-020433 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-020433            │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │ 29 Nov 25 09:18 UTC │
	│ start   │ -p newest-cni-020433 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-020433            │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │ 29 Nov 25 09:18 UTC │
	│ image   │ newest-cni-020433 image list --format=json                                                                                                                                                                                                    │ newest-cni-020433            │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │ 29 Nov 25 09:18 UTC │
	│ pause   │ -p newest-cni-020433 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-020433            │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:18:22
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 09:18:22.913289  354652 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:18:22.913543  354652 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:18:22.913551  354652 out.go:374] Setting ErrFile to fd 2...
	I1129 09:18:22.913555  354652 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:18:22.913776  354652 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 09:18:22.914256  354652 out.go:368] Setting JSON to false
	I1129 09:18:22.915265  354652 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3655,"bootTime":1764404248,"procs":281,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 09:18:22.915328  354652 start.go:143] virtualization: kvm guest
	I1129 09:18:22.917011  354652 out.go:179] * [newest-cni-020433] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 09:18:22.918169  354652 notify.go:221] Checking for updates...
	I1129 09:18:22.918207  354652 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:18:22.919406  354652 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:18:22.920536  354652 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 09:18:22.921900  354652 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5652/.minikube
	I1129 09:18:22.922969  354652 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 09:18:22.924112  354652 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:18:22.925532  354652 config.go:182] Loaded profile config "newest-cni-020433": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:18:22.926103  354652 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:18:22.949902  354652 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1129 09:18:22.949997  354652 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:18:23.008467  354652 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:44 SystemTime:2025-11-29 09:18:22.998346617 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:18:23.008583  354652 docker.go:319] overlay module found
	I1129 09:18:23.010450  354652 out.go:179] * Using the docker driver based on existing profile
	I1129 09:18:23.011562  354652 start.go:309] selected driver: docker
	I1129 09:18:23.011577  354652 start.go:927] validating driver "docker" against &{Name:newest-cni-020433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-020433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:18:23.011670  354652 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:18:23.012271  354652 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:18:23.069368  354652 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:44 SystemTime:2025-11-29 09:18:23.058204556 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:18:23.069670  354652 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1129 09:18:23.069697  354652 cni.go:84] Creating CNI manager for ""
	I1129 09:18:23.069758  354652 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:18:23.069798  354652 start.go:353] cluster config:
	{Name:newest-cni-020433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-020433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:18:23.071469  354652 out.go:179] * Starting "newest-cni-020433" primary control-plane node in "newest-cni-020433" cluster
	I1129 09:18:23.072741  354652 cache.go:134] Beginning downloading kic base image for docker with crio
	I1129 09:18:23.074047  354652 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 09:18:23.075089  354652 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:18:23.075123  354652 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1129 09:18:23.075131  354652 cache.go:65] Caching tarball of preloaded images
	I1129 09:18:23.075237  354652 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 09:18:23.075256  354652 preload.go:238] Found /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1129 09:18:23.075266  354652 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1129 09:18:23.075368  354652 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/config.json ...
	I1129 09:18:23.096739  354652 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 09:18:23.096759  354652 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 09:18:23.096776  354652 cache.go:243] Successfully downloaded all kic artifacts
	I1129 09:18:23.096807  354652 start.go:360] acquireMachinesLock for newest-cni-020433: {Name:mk6347901682a01c9d317c6a402722ce1e16792e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:18:23.096885  354652 start.go:364] duration metric: took 59.245µs to acquireMachinesLock for "newest-cni-020433"
	I1129 09:18:23.096904  354652 start.go:96] Skipping create...Using existing machine configuration
	I1129 09:18:23.096909  354652 fix.go:54] fixHost starting: 
	I1129 09:18:23.097115  354652 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Status}}
	I1129 09:18:23.115627  354652 fix.go:112] recreateIfNeeded on newest-cni-020433: state=Stopped err=<nil>
	W1129 09:18:23.115666  354652 fix.go:138] unexpected machine state, will restart: <nil>
	I1129 09:18:23.117441  354652 out.go:252] * Restarting existing docker container for "newest-cni-020433" ...
	I1129 09:18:23.117513  354652 cli_runner.go:164] Run: docker start newest-cni-020433
	I1129 09:18:23.365176  354652 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Status}}
	I1129 09:18:23.385709  354652 kic.go:430] container "newest-cni-020433" state is running.
	I1129 09:18:23.386157  354652 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-020433
	I1129 09:18:23.407163  354652 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/config.json ...
	I1129 09:18:23.407479  354652 machine.go:94] provisionDockerMachine start ...
	I1129 09:18:23.407552  354652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:18:23.426601  354652 main.go:143] libmachine: Using SSH client type: native
	I1129 09:18:23.426861  354652 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1129 09:18:23.426876  354652 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:18:23.427481  354652 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43040->127.0.0.1:33134: read: connection reset by peer
	I1129 09:18:26.574027  354652 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-020433
	
	I1129 09:18:26.574058  354652 ubuntu.go:182] provisioning hostname "newest-cni-020433"
	I1129 09:18:26.574126  354652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:18:26.593703  354652 main.go:143] libmachine: Using SSH client type: native
	I1129 09:18:26.593937  354652 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1129 09:18:26.593952  354652 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-020433 && echo "newest-cni-020433" | sudo tee /etc/hostname
	I1129 09:18:26.748793  354652 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-020433
	
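The provisioning steps above run over SSH to the container's host-mapped port (33134 in this run, per the libmachine lines). For reference, the same session could be reproduced by hand with the key path and username the log reports below; this is purely illustrative and not part of the test:

	# Hypothetical manual equivalent of the provisioning SSH session
	# (port, key path and user taken from this run's log lines):
	ssh -i /home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa \
	    -p 33134 docker@127.0.0.1 hostname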
	I1129 09:18:26.748900  354652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:18:26.768520  354652 main.go:143] libmachine: Using SSH client type: native
	I1129 09:18:26.768766  354652 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1129 09:18:26.768785  354652 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-020433' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-020433/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-020433' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:18:26.913768  354652 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 09:18:26.913798  354652 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-5652/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-5652/.minikube}
	I1129 09:18:26.913825  354652 ubuntu.go:190] setting up certificates
	I1129 09:18:26.913854  354652 provision.go:84] configureAuth start
	I1129 09:18:26.913908  354652 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-020433
	I1129 09:18:26.932382  354652 provision.go:143] copyHostCerts
	I1129 09:18:26.932451  354652 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem, removing ...
	I1129 09:18:26.932462  354652 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem
	I1129 09:18:26.932537  354652 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem (1078 bytes)
	I1129 09:18:26.932667  354652 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem, removing ...
	I1129 09:18:26.932678  354652 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem
	I1129 09:18:26.932713  354652 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem (1123 bytes)
	I1129 09:18:26.932789  354652 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem, removing ...
	I1129 09:18:26.932797  354652 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem
	I1129 09:18:26.932824  354652 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem (1675 bytes)
	I1129 09:18:26.932918  354652 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem org=jenkins.newest-cni-020433 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-020433]
	I1129 09:18:27.019502  354652 provision.go:177] copyRemoteCerts
	I1129 09:18:27.019562  354652 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:18:27.019636  354652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:18:27.038425  354652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:18:27.140390  354652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1129 09:18:27.158981  354652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1129 09:18:27.177649  354652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1129 09:18:27.196616  354652 provision.go:87] duration metric: took 282.7463ms to configureAuth
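The server certificate generated above carries the SAN list from the provision.go:117 line (127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-020433). Assuming the remote paths from the scp lines, one way to spot-check it on the machine:

	# Confirm the SANs baked into the machine's server cert (illustrative):
	openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'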
	I1129 09:18:27.196647  354652 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:18:27.196878  354652 config.go:182] Loaded profile config "newest-cni-020433": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:18:27.197022  354652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:18:27.216032  354652 main.go:143] libmachine: Using SSH client type: native
	I1129 09:18:27.216354  354652 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1129 09:18:27.216384  354652 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 09:18:27.519413  354652 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 09:18:27.519438  354652 machine.go:97] duration metric: took 4.111941352s to provisionDockerMachine
	I1129 09:18:27.519452  354652 start.go:293] postStartSetup for "newest-cni-020433" (driver="docker")
	I1129 09:18:27.519462  354652 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:18:27.519558  354652 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:18:27.519606  354652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:18:27.539047  354652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:18:27.641862  354652 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:18:27.645552  354652 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:18:27.645575  354652 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:18:27.645586  354652 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5652/.minikube/addons for local assets ...
	I1129 09:18:27.645638  354652 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5652/.minikube/files for local assets ...
	I1129 09:18:27.645706  354652 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem -> 92162.pem in /etc/ssl/certs
	I1129 09:18:27.645794  354652 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:18:27.653726  354652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem --> /etc/ssl/certs/92162.pem (1708 bytes)
	I1129 09:18:27.672082  354652 start.go:296] duration metric: took 152.616777ms for postStartSetup
	I1129 09:18:27.672175  354652 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:18:27.672233  354652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:18:27.690654  354652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:18:27.790337  354652 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 09:18:27.794938  354652 fix.go:56] duration metric: took 4.698020903s for fixHost
	I1129 09:18:27.794968  354652 start.go:83] releasing machines lock for "newest-cni-020433", held for 4.698071315s
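The acquireMachinesLock/releasing pair brackets the whole fixHost phase; per the log it is a named lock with a 500ms retry delay and a 10m timeout. A rough shell analogue using flock, which is not minikube's actual implementation and uses a hypothetical lock file path:

	# Illustrative only: approximate a timed machines lock with flock(1).
	exec 9>/tmp/minikube-machines.lock
	flock -w 600 9 || { echo 'timed out acquiring machines lock' >&2; exit 1; }
	# ... start/fix the machine while holding fd 9 ...
	exec 9>&-   # closing the fd releases the lock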
	I1129 09:18:27.795092  354652 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-020433
	I1129 09:18:27.813713  354652 ssh_runner.go:195] Run: cat /version.json
	I1129 09:18:27.813755  354652 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:18:27.813759  354652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:18:27.813812  354652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:18:27.834323  354652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:18:27.834651  354652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:18:27.984613  354652 ssh_runner.go:195] Run: systemctl --version
	I1129 09:18:27.991157  354652 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 09:18:28.028018  354652 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:18:28.032919  354652 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:18:28.032985  354652 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:18:28.041157  354652 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
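The find invocation above is logged with its shell quoting stripped. Restoring the quoting (an assumption about how the logged command was actually quoted), the runnable form would look like:

	# Disable stray bridge/podman CNI configs by renaming them (quoting restored):
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf "%p, " -exec sh -c 'sudo mv {} {}.mk_disabled' \;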
	I1129 09:18:28.041179  354652 start.go:496] detecting cgroup driver to use...
	I1129 09:18:28.041215  354652 detect.go:190] detected "systemd" cgroup driver on host os
	I1129 09:18:28.041250  354652 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 09:18:28.055957  354652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 09:18:28.069029  354652 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:18:28.069091  354652 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:18:28.084197  354652 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:18:28.097201  354652 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:18:28.177230  354652 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:18:28.260633  354652 docker.go:234] disabling docker service ...
	I1129 09:18:28.260710  354652 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:18:28.275602  354652 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:18:28.288562  354652 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:18:28.370456  354652 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:18:28.454297  354652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:18:28.466957  354652 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:18:28.481688  354652 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 09:18:28.481741  354652 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:18:28.491235  354652 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1129 09:18:28.491311  354652 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:18:28.500654  354652 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:18:28.510276  354652 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:18:28.519652  354652 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:18:28.528362  354652 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:18:28.537539  354652 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:18:28.546175  354652 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:18:28.555401  354652 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:18:28.563104  354652 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:18:28.571284  354652 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:18:28.651185  354652 ssh_runner.go:195] Run: sudo systemctl restart crio
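The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before the restart. A quick way to confirm the result took effect (keys and path from the log; expected values per the crio.go lines):

	# Spot-check the rewritten cri-o settings:
	grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"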
	I1129 09:18:28.781462  354652 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 09:18:28.781526  354652 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 09:18:28.785726  354652 start.go:564] Will wait 60s for crictl version
	I1129 09:18:28.785791  354652 ssh_runner.go:195] Run: which crictl
	I1129 09:18:28.789542  354652 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:18:28.813688  354652 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1129 09:18:28.813765  354652 ssh_runner.go:195] Run: crio --version
	I1129 09:18:28.842352  354652 ssh_runner.go:195] Run: crio --version
	I1129 09:18:28.872543  354652 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1129 09:18:28.873926  354652 cli_runner.go:164] Run: docker network inspect newest-cni-020433 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:18:28.892798  354652 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1129 09:18:28.897219  354652 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
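The bash one-liner above atomically refreshes the host.minikube.internal alias: it copies /etc/hosts minus any stale entry into a temp file, appends the fresh mapping, then copies the temp file back over /etc/hosts in one cp. Spelled out step by step (same paths and IP):

	# Equivalent, unrolled for readability:
	grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/h.$$
	printf '192.168.76.1\thost.minikube.internal\n' >> /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts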
	I1129 09:18:28.909344  354652 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1129 09:18:28.910519  354652 kubeadm.go:884] updating cluster {Name:newest-cni-020433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-020433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:18:28.910669  354652 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:18:28.910732  354652 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:18:28.943507  354652 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:18:28.943530  354652 crio.go:433] Images already preloaded, skipping extraction
	I1129 09:18:28.943576  354652 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:18:28.970109  354652 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:18:28.970136  354652 cache_images.go:86] Images are preloaded, skipping loading
	I1129 09:18:28.970143  354652 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1129 09:18:28.970251  354652 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-020433 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-020433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
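The drop-in above (written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf by the scp a few lines below) first clears ExecStart and then replaces it. Once it is in place, the merged unit can be inspected with systemctl, e.g.:

	# Show the effective kubelet unit including the minikube drop-in:
	systemctl cat kubelet | grep -A1 '^ExecStart='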
	I1129 09:18:28.970324  354652 ssh_runner.go:195] Run: crio config
	I1129 09:18:29.019432  354652 cni.go:84] Creating CNI manager for ""
	I1129 09:18:29.019463  354652 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:18:29.019484  354652 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1129 09:18:29.019511  354652 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-020433 NodeName:newest-cni-020433 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:18:29.019719  354652 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-020433"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
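The three documents above are written to /var/tmp/minikube/kubeadm.yaml.new (see the 2211-byte scp a few lines below). Recent kubeadm releases can sanity-check such a file offline; assuming the validate subcommand is available in this v1.34 toolchain, a check on the machine would look like:

	# Validate the generated config without applying it (illustrative):
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new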
	I1129 09:18:29.019794  354652 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 09:18:29.028444  354652 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 09:18:29.028503  354652 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:18:29.036431  354652 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1129 09:18:29.049487  354652 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:18:29.062574  354652 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1129 09:18:29.075938  354652 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1129 09:18:29.079976  354652 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:18:29.090904  354652 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:18:29.169012  354652 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:18:29.194386  354652 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433 for IP: 192.168.76.2
	I1129 09:18:29.194426  354652 certs.go:195] generating shared ca certs ...
	I1129 09:18:29.194449  354652 certs.go:227] acquiring lock for ca certs: {Name:mk9945b3aa42b3d50b9bfd795af3a1c63f4e35bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:18:29.194605  354652 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key
	I1129 09:18:29.194643  354652 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key
	I1129 09:18:29.194652  354652 certs.go:257] generating profile certs ...
	I1129 09:18:29.194740  354652 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/client.key
	I1129 09:18:29.194805  354652 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.key.22e84c70
	I1129 09:18:29.194866  354652 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.key
	I1129 09:18:29.194982  354652 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216.pem (1338 bytes)
	W1129 09:18:29.195015  354652 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216_empty.pem, impossibly tiny 0 bytes
	I1129 09:18:29.195025  354652 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem (1679 bytes)
	I1129 09:18:29.195049  354652 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem (1078 bytes)
	I1129 09:18:29.195077  354652 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:18:29.195102  354652 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem (1675 bytes)
	I1129 09:18:29.195145  354652 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem (1708 bytes)
	I1129 09:18:29.195819  354652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:18:29.215575  354652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 09:18:29.235466  354652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:18:29.256295  354652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1129 09:18:29.281061  354652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1129 09:18:29.300152  354652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1129 09:18:29.318868  354652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:18:29.337971  354652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1129 09:18:29.356698  354652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216.pem --> /usr/share/ca-certificates/9216.pem (1338 bytes)
	I1129 09:18:29.375486  354652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem --> /usr/share/ca-certificates/92162.pem (1708 bytes)
	I1129 09:18:29.395137  354652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:18:29.414138  354652 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:18:29.427192  354652 ssh_runner.go:195] Run: openssl version
	I1129 09:18:29.433526  354652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9216.pem && ln -fs /usr/share/ca-certificates/9216.pem /etc/ssl/certs/9216.pem"
	I1129 09:18:29.442578  354652 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9216.pem
	I1129 09:18:29.446722  354652 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:34 /usr/share/ca-certificates/9216.pem
	I1129 09:18:29.446791  354652 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9216.pem
	I1129 09:18:29.481498  354652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9216.pem /etc/ssl/certs/51391683.0"
	I1129 09:18:29.490132  354652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/92162.pem && ln -fs /usr/share/ca-certificates/92162.pem /etc/ssl/certs/92162.pem"
	I1129 09:18:29.499329  354652 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92162.pem
	I1129 09:18:29.503493  354652 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:34 /usr/share/ca-certificates/92162.pem
	I1129 09:18:29.503551  354652 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92162.pem
	I1129 09:18:29.538088  354652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/92162.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 09:18:29.547245  354652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:18:29.556393  354652 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:18:29.560581  354652 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:18:29.560658  354652 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:18:29.597444  354652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
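The symlink names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes with a .0 suffix, which is how the /etc/ssl/certs directory lookup works; each hash comes straight from the x509 -hash calls the log runs:

	# The hash printed here determines the /etc/ssl/certs symlink name:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, hence the b5213941.0 link above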
	I1129 09:18:29.606341  354652 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:18:29.610482  354652 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1129 09:18:29.645309  354652 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1129 09:18:29.680185  354652 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1129 09:18:29.720879  354652 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1129 09:18:29.763605  354652 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1129 09:18:29.809504  354652 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
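Each -checkend 86400 call above asks whether the certificate is still valid 86400 seconds (24 hours) from now; exit status 0 means the cert outlives that window. In script form, with a cert path taken from the scp lines earlier in this log:

	# openssl exits 0 if the cert is still valid N seconds from now, non-zero otherwise:
	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	    echo 'apiserver cert valid for at least another 24h'
	else
	    echo 'apiserver cert expires within 24h'
	fi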
	I1129 09:18:29.865491  354652 kubeadm.go:401] StartCluster: {Name:newest-cni-020433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-020433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:18:29.865603  354652 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 09:18:29.865681  354652 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:18:29.897938  354652 cri.go:89] found id: "5d737be9886c464ab2ee4b01f6470c5147ee1d043d43b8028fca68dff34978c1"
	I1129 09:18:29.897973  354652 cri.go:89] found id: "cc99378f3afa1b471c013ab021cd9149dfe1e247b71c547969ec37147062fe7a"
	I1129 09:18:29.897979  354652 cri.go:89] found id: "78d9ae9dc233e43c0a5758db285e1a9283d698f9ec56f9f3e6086457bf96931f"
	I1129 09:18:29.897983  354652 cri.go:89] found id: "38ea4a65fa801144421a1e26140edce1f295aaa42e72beef2ceab2847588e64c"
	I1129 09:18:29.897987  354652 cri.go:89] found id: ""
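The cri.go lines above collect kube-system container IDs by shelling out to crictl with a pod-namespace label filter and reading the --quiet (ID-only) output. A minimal sketch of that pattern (the command string comes from the log; the parsing and error handling here are simplified assumptions, not minikube's exact cri.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listCRIContainers returns the container IDs in a given pod namespace,
// using the same crictl invocation shown in the log above.
func listCRIContainers(namespace string) ([]string, error) {
	cmd := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace="+namespace)
	out, err := cmd.Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listCRIContainers("kube-system")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}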
	I1129 09:18:29.898037  354652 ssh_runner.go:195] Run: sudo runc list -f json
	W1129 09:18:29.910708  354652 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:18:29Z" level=error msg="open /run/runc: no such file or directory"
	I1129 09:18:29.910790  354652 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:18:29.919545  354652 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1129 09:18:29.919578  354652 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1129 09:18:29.919631  354652 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1129 09:18:29.927474  354652 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1129 09:18:29.927889  354652 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-020433" does not appear in /home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 09:18:29.928022  354652 kubeconfig.go:62] /home/jenkins/minikube-integration/22000-5652/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-020433" cluster setting kubeconfig missing "newest-cni-020433" context setting]
	I1129 09:18:29.928321  354652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/kubeconfig: {Name:mk6280be5f60ace95b1c1acc672c087e2366542f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
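The kubeconfig.go lines above detect that the profile's cluster and context entries are missing from the shared kubeconfig and rewrite the file under a write lock. A minimal sketch of the same repair using client-go's clientcmd package (the file path, cluster name, and server address come from the log; the CA path and auth-info name are placeholders, and minikube's own repair logic lives in kubeconfig.go, not here):

package main

import (
	"log"

	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/minikube-integration/22000-5652/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		log.Fatal(err)
	}

	// Re-add the missing cluster entry for the profile.
	cluster := clientcmdapi.NewCluster()
	cluster.Server = "https://192.168.76.2:8443"
	cluster.CertificateAuthority = "/path/to/ca.crt" // hypothetical
	cfg.Clusters["newest-cni-020433"] = cluster

	// Re-add the missing context entry and select it.
	ctx := clientcmdapi.NewContext()
	ctx.Cluster = "newest-cni-020433"
	ctx.AuthInfo = "newest-cni-020433"
	cfg.Contexts["newest-cni-020433"] = ctx
	cfg.CurrentContext = "newest-cni-020433"

	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		log.Fatal(err)
	}
}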
	I1129 09:18:29.929486  354652 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1129 09:18:29.937695  354652 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1129 09:18:29.937739  354652 kubeadm.go:602] duration metric: took 18.154953ms to restartPrimaryControlPlane
	I1129 09:18:29.937752  354652 kubeadm.go:403] duration metric: took 72.273654ms to StartCluster
	I1129 09:18:29.937771  354652 settings.go:142] acquiring lock: {Name:mkebe0b2667fa03f51e92459cd7b7f37fd4c23bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:18:29.937881  354652 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 09:18:29.938477  354652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/kubeconfig: {Name:mk6280be5f60ace95b1c1acc672c087e2366542f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:18:29.938714  354652 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 09:18:29.938781  354652 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 09:18:29.938910  354652 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-020433"
	I1129 09:18:29.938930  354652 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-020433"
	W1129 09:18:29.938943  354652 addons.go:248] addon storage-provisioner should already be in state true
	I1129 09:18:29.938942  354652 addons.go:70] Setting dashboard=true in profile "newest-cni-020433"
	I1129 09:18:29.938966  354652 addons.go:239] Setting addon dashboard=true in "newest-cni-020433"
	I1129 09:18:29.938972  354652 host.go:66] Checking if "newest-cni-020433" exists ...
	W1129 09:18:29.938980  354652 addons.go:248] addon dashboard should already be in state true
	I1129 09:18:29.938979  354652 addons.go:70] Setting default-storageclass=true in profile "newest-cni-020433"
	I1129 09:18:29.939011  354652 host.go:66] Checking if "newest-cni-020433" exists ...
	I1129 09:18:29.939012  354652 config.go:182] Loaded profile config "newest-cni-020433": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:18:29.939015  354652 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-020433"
	I1129 09:18:29.939462  354652 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Status}}
	I1129 09:18:29.939492  354652 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Status}}
	I1129 09:18:29.939529  354652 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Status}}
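Each addon goroutine first checks the machine's state by shelling out to docker container inspect with a Go template, as the three cli_runner lines above show. A compact sketch of that call (error handling simplified relative to minikube's cli_runner):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState returns a container's state ("running", "paused", ...)
// via the same `docker container inspect --format` call shown above.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("newest-cni-020433")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("state:", state)
}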
	I1129 09:18:29.944407  354652 out.go:179] * Verifying Kubernetes components...
	I1129 09:18:29.945756  354652 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:18:29.967660  354652 addons.go:239] Setting addon default-storageclass=true in "newest-cni-020433"
	W1129 09:18:29.967686  354652 addons.go:248] addon default-storageclass should already be in state true
	I1129 09:18:29.967719  354652 host.go:66] Checking if "newest-cni-020433" exists ...
	I1129 09:18:29.968224  354652 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Status}}
	I1129 09:18:29.968702  354652 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:18:29.970033  354652 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1129 09:18:29.970082  354652 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:18:29.970098  354652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 09:18:29.970155  354652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:18:29.972364  354652 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1129 09:18:29.973592  354652 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1129 09:18:29.973614  354652 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1129 09:18:29.973680  354652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:18:30.001828  354652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:18:30.005940  354652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:18:30.008996  354652 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 09:18:30.009023  354652 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 09:18:30.009083  354652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:18:30.035326  354652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:18:30.108596  354652 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:18:30.124649  354652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:18:30.125762  354652 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:18:30.125820  354652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:18:30.127338  354652 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1129 09:18:30.127439  354652 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1129 09:18:30.145021  354652 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1129 09:18:30.145051  354652 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1129 09:18:30.147415  354652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 09:18:30.163720  354652 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1129 09:18:30.163747  354652 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1129 09:18:30.181041  354652 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1129 09:18:30.181065  354652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1129 09:18:30.199375  354652 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1129 09:18:30.199405  354652 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1129 09:18:30.218945  354652 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1129 09:18:30.218974  354652 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1129 09:18:30.232479  354652 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1129 09:18:30.232511  354652 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1129 09:18:30.246685  354652 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1129 09:18:30.246723  354652 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1129 09:18:30.260912  354652 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1129 09:18:30.260939  354652 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1129 09:18:30.274728  354652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
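The dashboard addon is staged as individual YAML files under /etc/kubernetes/addons and then applied in a single kubectl invocation with one -f flag per manifest, against the cluster-local kubeconfig. A sketch of assembling that command (manifest list abbreviated; minikube runs this over its ssh_runner rather than a local exec, so this is an illustrative assumption):

package main

import (
	"log"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
		// ...remaining dashboard manifests from the log...
	}
	// Build: sudo env KUBECONFIG=... kubectl apply -f a.yaml -f b.yaml ...
	args := []string{"env", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.34.1/kubectl", "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
		log.Fatalf("kubectl apply failed: %v\n%s", err, out)
	}
}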
	I1129 09:18:31.962140  354652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.837414505s)
	I1129 09:18:31.962216  354652 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.836375812s)
	I1129 09:18:31.962250  354652 api_server.go:72] duration metric: took 2.023508709s to wait for apiserver process to appear ...
	I1129 09:18:31.962259  354652 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:18:31.962278  354652 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 09:18:31.962311  354652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.814870664s)
	I1129 09:18:31.962430  354652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.687663774s)
	I1129 09:18:31.964213  354652 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-020433 addons enable metrics-server
	
	I1129 09:18:31.970199  354652 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1129 09:18:31.970230  354652 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1129 09:18:31.978585  354652 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1129 09:18:31.979771  354652 addons.go:530] duration metric: took 2.041002343s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1129 09:18:32.463030  354652 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 09:18:32.467613  354652 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1129 09:18:32.467645  354652 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1129 09:18:32.963259  354652 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 09:18:32.967852  354652 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1129 09:18:32.969016  354652 api_server.go:141] control plane version: v1.34.1
	I1129 09:18:32.969074  354652 api_server.go:131] duration metric: took 1.006809193s to wait for apiserver health ...
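The healthz probes above show the usual pattern: poll /healthz roughly every 500ms, treat 500 responses as retryable while post-start hooks (here rbac/bootstrap-roles and the priority-class bootstrap) finish, and stop once the endpoint returns 200 "ok". A hand-rolled sketch of such a loop (TLS verification is skipped purely for brevity; minikube verifies against the cluster CA, and its timeouts differ):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Assumption: skip verification for the sketch only; real code
		// should trust the cluster CA certificate instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned 200: %s\n", body)
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}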
	I1129 09:18:32.969086  354652 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:18:32.972682  354652 system_pods.go:59] 8 kube-system pods found
	I1129 09:18:32.972727  354652 system_pods.go:61] "coredns-66bc5c9577-h8nqv" [c8cbc934-0df3-44c5-a3d7-fff7ca54ef86] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1129 09:18:32.972737  354652 system_pods.go:61] "etcd-newest-cni-020433" [47991984-6243-463b-9cda-95d0e18b6092] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 09:18:32.972748  354652 system_pods.go:61] "kindnet-gxgwn" [7e13d750-7bcf-4e2a-9663-512ecc23781a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1129 09:18:32.972758  354652 system_pods.go:61] "kube-apiserver-newest-cni-020433" [20641eff-ff31-4e31-8983-1075116bcdd4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 09:18:32.972763  354652 system_pods.go:61] "kube-controller-manager-newest-cni-020433" [f5bece62-e41a-4cf6-bacc-29d4dd0754cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 09:18:32.972784  354652 system_pods.go:61] "kube-proxy-nqwzp" [118d6bdc-5c33-4ab5-bee8-6f8a3447c461] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1129 09:18:32.972796  354652 system_pods.go:61] "kube-scheduler-newest-cni-020433" [3224b587-95a1-4963-88ae-af38a3bd1d84] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 09:18:32.972800  354652 system_pods.go:61] "storage-provisioner" [30a16c03-a054-435c-8eec-ce64486eb6c2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1129 09:18:32.972807  354652 system_pods.go:74] duration metric: took 3.715736ms to wait for pod list to return data ...
	I1129 09:18:32.972817  354652 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:18:32.975621  354652 default_sa.go:45] found service account: "default"
	I1129 09:18:32.975644  354652 default_sa.go:55] duration metric: took 2.822466ms for default service account to be created ...
	I1129 09:18:32.975656  354652 kubeadm.go:587] duration metric: took 3.036913728s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1129 09:18:32.975692  354652 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:18:32.978703  354652 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1129 09:18:32.978732  354652 node_conditions.go:123] node cpu capacity is 8
	I1129 09:18:32.978746  354652 node_conditions.go:105] duration metric: took 3.050623ms to run NodePressure ...
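The node_conditions.go lines above read the node's ephemeral-storage and CPU capacity and verify that no pressure conditions are set. A sketch of the same read via client-go (the kubeconfig path is a placeholder; node name and capacity fields match the log):

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "newest-cni-020433", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("node storage ephemeral capacity:", node.Status.Capacity.StorageEphemeral().String())
	fmt.Println("node cpu capacity:", node.Status.Capacity.Cpu().String())
	// NodePressure verification inspects conditions such as MemoryPressure,
	// DiskPressure, and PIDPressure on the node status.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%s=%s\n", c.Type, c.Status)
	}
}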
	I1129 09:18:32.978761  354652 start.go:242] waiting for startup goroutines ...
	I1129 09:18:32.978771  354652 start.go:247] waiting for cluster config update ...
	I1129 09:18:32.978785  354652 start.go:256] writing updated cluster config ...
	I1129 09:18:32.979099  354652 ssh_runner.go:195] Run: rm -f paused
	I1129 09:18:33.029926  354652 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1129 09:18:33.032684  354652 out.go:179] * Done! kubectl is now configured to use "newest-cni-020433" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.571748272Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-nqwzp/POD" id=f9abdb11-2ad5-4a5d-9915-a6ea78774a31 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.57180103Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.572991207Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.573821374Z" level=info msg="Ran pod sandbox dd7cc21d3b639aebcdf23170aee53b8d2093fd875b54b274e5ff42839b1a0024 with infra container: kube-system/kindnet-gxgwn/POD" id=b6d7b83e-a9c7-490b-a966-5086bc6a730c name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.5751531Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=a07a1d19-7ef7-4744-beb0-3964b72fde1b name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.575732231Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=f9abdb11-2ad5-4a5d-9915-a6ea78774a31 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.576525063Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=f4d84ca1-27a2-4541-8b8d-1c77c719af18 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.577525258Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.577724164Z" level=info msg="Creating container: kube-system/kindnet-gxgwn/kindnet-cni" id=3b8f28a4-8545-4309-8425-abac057bca74 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.577821842Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.578483435Z" level=info msg="Ran pod sandbox 9c813698072e9895f5566d154b5a0e5b3d792c34341f479821cb6027ce0f7a5c with infra container: kube-system/kube-proxy-nqwzp/POD" id=f9abdb11-2ad5-4a5d-9915-a6ea78774a31 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.579468393Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=61022e37-970b-42e7-aa9f-b049c0877fa2 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.581348164Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=976f6058-11a4-4f77-ba83-fe75e6d222f1 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.582168216Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.583341296Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.583446868Z" level=info msg="Creating container: kube-system/kube-proxy-nqwzp/kube-proxy" id=6e898d0a-4e31-4478-9a21-1c4114506804 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.583602218Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.588739214Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.589349278Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.617144868Z" level=info msg="Created container 1466ac1e233eec9fe67dd8a0194308e6dedef82d9e2679668d0e7f5cfdf904be: kube-system/kindnet-gxgwn/kindnet-cni" id=3b8f28a4-8545-4309-8425-abac057bca74 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.617883707Z" level=info msg="Starting container: 1466ac1e233eec9fe67dd8a0194308e6dedef82d9e2679668d0e7f5cfdf904be" id=382d9f92-ed18-48fc-b7f9-297a161c8ba9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.619936016Z" level=info msg="Started container" PID=1056 containerID=1466ac1e233eec9fe67dd8a0194308e6dedef82d9e2679668d0e7f5cfdf904be description=kube-system/kindnet-gxgwn/kindnet-cni id=382d9f92-ed18-48fc-b7f9-297a161c8ba9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dd7cc21d3b639aebcdf23170aee53b8d2093fd875b54b274e5ff42839b1a0024
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.620601192Z" level=info msg="Created container c812c112070adb5b5f9bdaf23aee437cad110ec9b1047b6413e4de4a8bef25e0: kube-system/kube-proxy-nqwzp/kube-proxy" id=6e898d0a-4e31-4478-9a21-1c4114506804 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.62133129Z" level=info msg="Starting container: c812c112070adb5b5f9bdaf23aee437cad110ec9b1047b6413e4de4a8bef25e0" id=7b620ca1-6671-4135-8eeb-c40006ac4cab name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.624931873Z" level=info msg="Started container" PID=1057 containerID=c812c112070adb5b5f9bdaf23aee437cad110ec9b1047b6413e4de4a8bef25e0 description=kube-system/kube-proxy-nqwzp/kube-proxy id=7b620ca1-6671-4135-8eeb-c40006ac4cab name=/runtime.v1.RuntimeService/StartContainer sandboxID=9c813698072e9895f5566d154b5a0e5b3d792c34341f479821cb6027ce0f7a5c
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	c812c112070ad       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   4 seconds ago       Running             kube-proxy                1                   9c813698072e9       kube-proxy-nqwzp                            kube-system
	1466ac1e233ee       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   dd7cc21d3b639       kindnet-gxgwn                               kube-system
	5d737be9886c4       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   7 seconds ago       Running             kube-apiserver            1                   770d7ad5d91d9       kube-apiserver-newest-cni-020433            kube-system
	cc99378f3afa1       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   7 seconds ago       Running             kube-scheduler            1                   453a0795c6b25       kube-scheduler-newest-cni-020433            kube-system
	78d9ae9dc233e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   7 seconds ago       Running             etcd                      1                   5e1d74100a901       etcd-newest-cni-020433                      kube-system
	38ea4a65fa801       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   7 seconds ago       Running             kube-controller-manager   1                   95b60c2388bfc       kube-controller-manager-newest-cni-020433   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-020433
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-020433
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=newest-cni-020433
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T09_17_56_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 09:17:53 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-020433
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 09:18:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 09:18:31 +0000   Sat, 29 Nov 2025 09:17:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 09:18:31 +0000   Sat, 29 Nov 2025 09:17:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 09:18:31 +0000   Sat, 29 Nov 2025 09:17:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 29 Nov 2025 09:18:31 +0000   Sat, 29 Nov 2025 09:17:51 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-020433
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                9e13478b-5cce-4854-b5e2-d069a5e427ce
	  Boot ID:                    5b66e152-4b71-4129-a911-cefbf4861c86
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-020433                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         42s
	  kube-system                 kindnet-gxgwn                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      36s
	  kube-system                 kube-apiserver-newest-cni-020433             250m (3%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-controller-manager-newest-cni-020433    200m (2%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-proxy-nqwzp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-scheduler-newest-cni-020433             100m (1%)     0 (0%)      0 (0%)           0 (0%)         42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 35s                kube-proxy       
	  Normal  Starting                 4s                 kube-proxy       
	  Normal  Starting                 47s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  47s (x8 over 47s)  kubelet          Node newest-cni-020433 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    47s (x8 over 47s)  kubelet          Node newest-cni-020433 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     47s (x8 over 47s)  kubelet          Node newest-cni-020433 status is now: NodeHasSufficientPID
	  Normal  Starting                 42s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  42s                kubelet          Node newest-cni-020433 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s                kubelet          Node newest-cni-020433 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s                kubelet          Node newest-cni-020433 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           37s                node-controller  Node newest-cni-020433 event: Registered Node newest-cni-020433 in Controller
	  Normal  RegisteredNode           3s                 node-controller  Node newest-cni-020433 event: Registered Node newest-cni-020433 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 e1 87 e4 84 ae 08 06
	[  +0.002640] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8a c7 6c 3e 12 d1 08 06
	[ +14.778105] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 56 8a 29 c7 2a 6f 08 06
	[ +13.520989] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 86 32 aa eb 52 77 08 06
	[  +0.000371] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 56 8a 29 c7 2a 6f 08 06
	[ +14.762906] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 df ae 93 da f6 08 06
	[  +0.000384] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a c7 6c 3e 12 d1 08 06
	[Nov29 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 2e a3 ac f9 5c c0 08 06
	[ +13.297626] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 33 ff 54 aa a1 08 06
	[  +7.390906] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0a 68 f3 3d 3c e9 08 06
	[  +0.000436] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 2e a3 ac f9 5c c0 08 06
	[  +7.145922] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ea c3 2f 2a a3 01 08 06
	[  +0.000415] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e2 33 ff 54 aa a1 08 06
	
	
	==> etcd [78d9ae9dc233e43c0a5758db285e1a9283d698f9ec56f9f3e6086457bf96931f] <==
	{"level":"warn","ts":"2025-11-29T09:18:30.800386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.811239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.821207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.828770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.835174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.842135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.850209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.856557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.862561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.869688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.880009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.887635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.893962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.901099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.908377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.914556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.929702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.937262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.944653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.951231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.957560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.981829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.989251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.996692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:31.049514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58594","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:18:37 up  1:01,  0 user,  load average: 2.10, 3.37, 2.43
	Linux newest-cni-020433 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1466ac1e233eec9fe67dd8a0194308e6dedef82d9e2679668d0e7f5cfdf904be] <==
	I1129 09:18:32.896434       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 09:18:32.896712       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1129 09:18:32.896875       1 main.go:148] setting mtu 1500 for CNI 
	I1129 09:18:32.896892       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 09:18:32.896913       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T09:18:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 09:18:33.098498       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 09:18:33.098532       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 09:18:33.098544       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 09:18:33.099663       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1129 09:18:33.419143       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 09:18:33.419181       1 metrics.go:72] Registering metrics
	I1129 09:18:33.419257       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [5d737be9886c464ab2ee4b01f6470c5147ee1d043d43b8028fca68dff34978c1] <==
	I1129 09:18:31.519730       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1129 09:18:31.519888       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1129 09:18:31.520355       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1129 09:18:31.520440       1 aggregator.go:171] initial CRD sync complete...
	I1129 09:18:31.520488       1 autoregister_controller.go:144] Starting autoregister controller
	I1129 09:18:31.520514       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1129 09:18:31.520542       1 cache.go:39] Caches are synced for autoregister controller
	I1129 09:18:31.520631       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1129 09:18:31.525734       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1129 09:18:31.529632       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1129 09:18:31.546217       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:18:31.549298       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 09:18:31.770631       1 controller.go:667] quota admission added evaluator for: namespaces
	I1129 09:18:31.802288       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 09:18:31.823518       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 09:18:31.833438       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 09:18:31.840442       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 09:18:31.879218       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.66.86"}
	I1129 09:18:31.892682       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.176.109"}
	I1129 09:18:32.422788       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 09:18:35.240235       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 09:18:35.240291       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 09:18:35.291047       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 09:18:35.440163       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 09:18:35.491957       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [38ea4a65fa801144421a1e26140edce1f295aaa42e72beef2ceab2847588e64c] <==
	I1129 09:18:34.875987       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1129 09:18:34.887630       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1129 09:18:34.887648       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1129 09:18:34.887666       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1129 09:18:34.887693       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1129 09:18:34.887695       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1129 09:18:34.887700       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1129 09:18:34.887777       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1129 09:18:34.887780       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1129 09:18:34.887777       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1129 09:18:34.889609       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1129 09:18:34.892337       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1129 09:18:34.893348       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1129 09:18:34.893454       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:18:34.895610       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1129 09:18:34.898883       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1129 09:18:34.898985       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1129 09:18:34.899052       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-020433"
	I1129 09:18:34.899104       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1129 09:18:34.902700       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1129 09:18:34.905124       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1129 09:18:34.912203       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:18:34.912263       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:18:34.912271       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1129 09:18:34.912279       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [c812c112070adb5b5f9bdaf23aee437cad110ec9b1047b6413e4de4a8bef25e0] <==
	I1129 09:18:32.661314       1 server_linux.go:53] "Using iptables proxy"
	I1129 09:18:32.731687       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 09:18:32.832545       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 09:18:32.832586       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1129 09:18:32.832664       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 09:18:32.851511       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 09:18:32.851569       1 server_linux.go:132] "Using iptables Proxier"
	I1129 09:18:32.856774       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 09:18:32.857256       1 server.go:527] "Version info" version="v1.34.1"
	I1129 09:18:32.857295       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:18:32.858703       1 config.go:200] "Starting service config controller"
	I1129 09:18:32.858717       1 config.go:106] "Starting endpoint slice config controller"
	I1129 09:18:32.858727       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 09:18:32.858735       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 09:18:32.858811       1 config.go:309] "Starting node config controller"
	I1129 09:18:32.858824       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 09:18:32.858804       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 09:18:32.858949       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 09:18:32.959003       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1129 09:18:32.959057       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 09:18:32.959077       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 09:18:32.959122       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [cc99378f3afa1b471c013ab021cd9149dfe1e247b71c547969ec37147062fe7a] <==
	I1129 09:18:30.325109       1 serving.go:386] Generated self-signed cert in-memory
	I1129 09:18:31.482595       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1129 09:18:31.482686       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:18:31.487159       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1129 09:18:31.487220       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1129 09:18:31.487316       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:18:31.487316       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1129 09:18:31.487350       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1129 09:18:31.487338       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:18:31.487876       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1129 09:18:31.487927       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1129 09:18:31.587764       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:18:31.587780       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1129 09:18:31.587900       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Nov 29 09:18:31 newest-cni-020433 kubelet[681]: E1129 09:18:31.301631     681 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-020433\" not found" node="newest-cni-020433"
	Nov 29 09:18:31 newest-cni-020433 kubelet[681]: E1129 09:18:31.302166     681 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-020433\" not found" node="newest-cni-020433"
	Nov 29 09:18:31 newest-cni-020433 kubelet[681]: E1129 09:18:31.302343     681 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-020433\" not found" node="newest-cni-020433"
	Nov 29 09:18:31 newest-cni-020433 kubelet[681]: I1129 09:18:31.465059     681 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-020433"
	Nov 29 09:18:31 newest-cni-020433 kubelet[681]: I1129 09:18:31.544304     681 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-020433"
	Nov 29 09:18:31 newest-cni-020433 kubelet[681]: I1129 09:18:31.544426     681 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-020433"
	Nov 29 09:18:31 newest-cni-020433 kubelet[681]: I1129 09:18:31.544459     681 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 29 09:18:31 newest-cni-020433 kubelet[681]: I1129 09:18:31.545431     681 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 29 09:18:31 newest-cni-020433 kubelet[681]: E1129 09:18:31.580901     681 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-020433\" already exists" pod="kube-system/kube-apiserver-newest-cni-020433"
	Nov 29 09:18:31 newest-cni-020433 kubelet[681]: I1129 09:18:31.580943     681 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-020433"
	Nov 29 09:18:31 newest-cni-020433 kubelet[681]: E1129 09:18:31.587988     681 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-020433\" already exists" pod="kube-system/kube-controller-manager-newest-cni-020433"
	Nov 29 09:18:31 newest-cni-020433 kubelet[681]: I1129 09:18:31.588036     681 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-020433"
	Nov 29 09:18:31 newest-cni-020433 kubelet[681]: E1129 09:18:31.595049     681 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-020433\" already exists" pod="kube-system/kube-scheduler-newest-cni-020433"
	Nov 29 09:18:31 newest-cni-020433 kubelet[681]: I1129 09:18:31.595096     681 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-020433"
	Nov 29 09:18:31 newest-cni-020433 kubelet[681]: E1129 09:18:31.602320     681 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-020433\" already exists" pod="kube-system/etcd-newest-cni-020433"
	Nov 29 09:18:32 newest-cni-020433 kubelet[681]: I1129 09:18:32.260691     681 apiserver.go:52] "Watching apiserver"
	Nov 29 09:18:32 newest-cni-020433 kubelet[681]: I1129 09:18:32.265591     681 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 29 09:18:32 newest-cni-020433 kubelet[681]: I1129 09:18:32.357930     681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/118d6bdc-5c33-4ab5-bee8-6f8a3447c461-xtables-lock\") pod \"kube-proxy-nqwzp\" (UID: \"118d6bdc-5c33-4ab5-bee8-6f8a3447c461\") " pod="kube-system/kube-proxy-nqwzp"
	Nov 29 09:18:32 newest-cni-020433 kubelet[681]: I1129 09:18:32.357988     681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e13d750-7bcf-4e2a-9663-512ecc23781a-xtables-lock\") pod \"kindnet-gxgwn\" (UID: \"7e13d750-7bcf-4e2a-9663-512ecc23781a\") " pod="kube-system/kindnet-gxgwn"
	Nov 29 09:18:32 newest-cni-020433 kubelet[681]: I1129 09:18:32.358067     681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/118d6bdc-5c33-4ab5-bee8-6f8a3447c461-lib-modules\") pod \"kube-proxy-nqwzp\" (UID: \"118d6bdc-5c33-4ab5-bee8-6f8a3447c461\") " pod="kube-system/kube-proxy-nqwzp"
	Nov 29 09:18:32 newest-cni-020433 kubelet[681]: I1129 09:18:32.358099     681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7e13d750-7bcf-4e2a-9663-512ecc23781a-cni-cfg\") pod \"kindnet-gxgwn\" (UID: \"7e13d750-7bcf-4e2a-9663-512ecc23781a\") " pod="kube-system/kindnet-gxgwn"
	Nov 29 09:18:32 newest-cni-020433 kubelet[681]: I1129 09:18:32.358120     681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e13d750-7bcf-4e2a-9663-512ecc23781a-lib-modules\") pod \"kindnet-gxgwn\" (UID: \"7e13d750-7bcf-4e2a-9663-512ecc23781a\") " pod="kube-system/kindnet-gxgwn"
	Nov 29 09:18:34 newest-cni-020433 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 29 09:18:34 newest-cni-020433 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 29 09:18:34 newest-cni-020433 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
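
Note on the kube-proxy excerpt above: the only error-level line is the warning that nodePortAddresses is unset, and the warning quotes its own remedy (--nodeport-addresses primary). That flag belongs to kube-proxy itself; whether this run's minikube would forward it via --extra-config in a kube-proxy.<flag> form is an assumption patterned on the kubeadm.pod-network-cidr option used elsewhere in this report, not something verified here:

	# kube-proxy's own suggestion, quoted verbatim from the warning
	kube-proxy --nodeport-addresses=primary
	# hypothetical minikube spelling (unverified), following the kubeadm.<key> pattern
	out/minikube-linux-amd64 start -p newest-cni-020433 --extra-config=kube-proxy.nodeport-addresses=primary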
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-020433 -n newest-cni-020433
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-020433 -n newest-cni-020433: exit status 2 (337.073468ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
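
The --format value is a Go template evaluated against minikube's status struct; this post-mortem selects .APIServer here and .Host further down. A combined query is sketched below; the .Kubelet field is assumed to exist on the same struct and is not exercised anywhere in this log:

	out/minikube-linux-amd64 status -p newest-cni-020433 -n newest-cni-020433 \
	  --format='host={{.Host}} apiserver={{.APIServer}} kubelet={{.Kubelet}}'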
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-020433 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-h8nqv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-7q86r kubernetes-dashboard-855c9754f9-2n8qz
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-020433 describe pod coredns-66bc5c9577-h8nqv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-7q86r kubernetes-dashboard-855c9754f9-2n8qz
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-020433 describe pod coredns-66bc5c9577-h8nqv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-7q86r kubernetes-dashboard-855c9754f9-2n8qz: exit status 1 (63.188375ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-h8nqv" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-7q86r" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-2n8qz" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-020433 describe pod coredns-66bc5c9577-h8nqv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-7q86r kubernetes-dashboard-855c9754f9-2n8qz: exit status 1
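
The NotFound errors are expected rather than a second failure: describe ran without a -n flag, so kubectl searched the default namespace, while the names returned by the field-selector query live in kube-system and kubernetes-dashboard. A sketch that keeps each non-running pod next to its namespace, using the same context and selector as above:

	kubectl --context newest-cni-020433 get po -A \
	  --field-selector=status.phase!=Running \
	  -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name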
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-020433
helpers_test.go:243: (dbg) docker inspect newest-cni-020433:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a9ac1a439ce6b1fb29944ad6280f3be5fb721f32ccaf7c000e9793dbab9d8063",
	        "Created": "2025-11-29T09:17:38.486313312Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 354856,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T09:18:23.142599404Z",
	            "FinishedAt": "2025-11-29T09:18:22.294990164Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/a9ac1a439ce6b1fb29944ad6280f3be5fb721f32ccaf7c000e9793dbab9d8063/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a9ac1a439ce6b1fb29944ad6280f3be5fb721f32ccaf7c000e9793dbab9d8063/hostname",
	        "HostsPath": "/var/lib/docker/containers/a9ac1a439ce6b1fb29944ad6280f3be5fb721f32ccaf7c000e9793dbab9d8063/hosts",
	        "LogPath": "/var/lib/docker/containers/a9ac1a439ce6b1fb29944ad6280f3be5fb721f32ccaf7c000e9793dbab9d8063/a9ac1a439ce6b1fb29944ad6280f3be5fb721f32ccaf7c000e9793dbab9d8063-json.log",
	        "Name": "/newest-cni-020433",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-020433:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-020433",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a9ac1a439ce6b1fb29944ad6280f3be5fb721f32ccaf7c000e9793dbab9d8063",
	                "LowerDir": "/var/lib/docker/overlay2/a8a6ba38910989b11fc84ca9f5e0a6bd875cd888d1b48820e429d717fc735951-init/diff:/var/lib/docker/overlay2/5b012372cfb54f6c71f4d7f0bca0124866eeda530eaa04bd84a67c7b4c8b35a8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a8a6ba38910989b11fc84ca9f5e0a6bd875cd888d1b48820e429d717fc735951/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a8a6ba38910989b11fc84ca9f5e0a6bd875cd888d1b48820e429d717fc735951/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a8a6ba38910989b11fc84ca9f5e0a6bd875cd888d1b48820e429d717fc735951/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-020433",
	                "Source": "/var/lib/docker/volumes/newest-cni-020433/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-020433",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-020433",
	                "name.minikube.sigs.k8s.io": "newest-cni-020433",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "7e91697c38934e47c30a91f625f1bde5cdc7cd70b30bdde230a2232ce70df9f6",
	            "SandboxKey": "/var/run/docker/netns/7e91697c3893",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-020433": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "aef7b8e187de0f8bf6cc69caec08dbd4417b8aa19d6d09df2b42cb2151e49057",
	                    "EndpointID": "60229fccf4ffdbd9e664e54e8e9ce771a02b6d6f29f4263b377ace8b2b6c8f30",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "62:ba:54:91:03:26",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-020433",
	                        "a9ac1a439ce6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
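
Most of the inspect dump above is boilerplate; the fields this post-mortem actually consumes are the run state and the published ports. Both can be pulled with the same --format template mechanism the harness itself uses in the Last Start log below (docker container inspect --format={{.State.Status}}), for example:

	docker inspect newest-cni-020433 --format '{{.State.Status}} {{json .NetworkSettings.Ports}}'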
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-020433 -n newest-cni-020433
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-020433 -n newest-cni-020433: exit status 2 (331.59434ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
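
As with the .APIServer probe earlier, exit status 2 alongside a Host state of Running means minikube status found a component outside its expected state and encoded that in the exit code; the command itself did not fail. When reproducing, the code can be captured next to the table:

	out/minikube-linux-amd64 status -p newest-cni-020433 -n newest-cni-020433; echo "status exit=$?"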
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-020433 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-632243 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ start   │ -p default-k8s-diff-port-632243 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ image   │ old-k8s-version-680646 image list --format=json                                                                                                                                                                                               │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ pause   │ -p old-k8s-version-680646 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │                     │
	│ delete  │ -p old-k8s-version-680646                                                                                                                                                                                                                     │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ image   │ no-preload-897274 image list --format=json                                                                                                                                                                                                    │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ pause   │ -p no-preload-897274 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │                     │
	│ delete  │ -p old-k8s-version-680646                                                                                                                                                                                                                     │ old-k8s-version-680646       │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ start   │ -p newest-cni-020433 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-020433            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:18 UTC │
	│ delete  │ -p no-preload-897274                                                                                                                                                                                                                          │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ delete  │ -p no-preload-897274                                                                                                                                                                                                                          │ no-preload-897274            │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ image   │ default-k8s-diff-port-632243 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:17 UTC │
	│ pause   │ -p default-k8s-diff-port-632243 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-020433 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-020433            │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │                     │
	│ image   │ embed-certs-160987 image list --format=json                                                                                                                                                                                                   │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │ 29 Nov 25 09:18 UTC │
	│ pause   │ -p embed-certs-160987 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-632243                                                                                                                                                                                                               │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │ 29 Nov 25 09:18 UTC │
	│ stop    │ -p newest-cni-020433 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-020433            │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │ 29 Nov 25 09:18 UTC │
	│ delete  │ -p default-k8s-diff-port-632243                                                                                                                                                                                                               │ default-k8s-diff-port-632243 │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │ 29 Nov 25 09:18 UTC │
	│ delete  │ -p embed-certs-160987                                                                                                                                                                                                                         │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │ 29 Nov 25 09:18 UTC │
	│ delete  │ -p embed-certs-160987                                                                                                                                                                                                                         │ embed-certs-160987           │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │ 29 Nov 25 09:18 UTC │
	│ addons  │ enable dashboard -p newest-cni-020433 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-020433            │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │ 29 Nov 25 09:18 UTC │
	│ start   │ -p newest-cni-020433 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-020433            │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │ 29 Nov 25 09:18 UTC │
	│ image   │ newest-cni-020433 image list --format=json                                                                                                                                                                                                    │ newest-cni-020433            │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │ 29 Nov 25 09:18 UTC │
	│ pause   │ -p newest-cni-020433 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-020433            │ jenkins │ v1.37.0 │ 29 Nov 25 09:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
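
Every pause row in the audit table above has an empty END TIME, consistent with the */serial/Pause failures this report is tracking: each pause command started but never recorded completion. Since the table is part of the captured logs output, the unfinished rows can be isolated with a plain filter, e.g.:

	out/minikube-linux-amd64 -p newest-cni-020433 logs -n 25 | grep '│ pause'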
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:18:22
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 09:18:22.913289  354652 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:18:22.913543  354652 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:18:22.913551  354652 out.go:374] Setting ErrFile to fd 2...
	I1129 09:18:22.913555  354652 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:18:22.913776  354652 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 09:18:22.914256  354652 out.go:368] Setting JSON to false
	I1129 09:18:22.915265  354652 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3655,"bootTime":1764404248,"procs":281,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 09:18:22.915328  354652 start.go:143] virtualization: kvm guest
	I1129 09:18:22.917011  354652 out.go:179] * [newest-cni-020433] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 09:18:22.918169  354652 notify.go:221] Checking for updates...
	I1129 09:18:22.918207  354652 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:18:22.919406  354652 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:18:22.920536  354652 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 09:18:22.921900  354652 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5652/.minikube
	I1129 09:18:22.922969  354652 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 09:18:22.924112  354652 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:18:22.925532  354652 config.go:182] Loaded profile config "newest-cni-020433": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:18:22.926103  354652 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:18:22.949902  354652 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1129 09:18:22.949997  354652 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:18:23.008467  354652 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:44 SystemTime:2025-11-29 09:18:22.998346617 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:18:23.008583  354652 docker.go:319] overlay module found
	I1129 09:18:23.010450  354652 out.go:179] * Using the docker driver based on existing profile
	I1129 09:18:23.011562  354652 start.go:309] selected driver: docker
	I1129 09:18:23.011577  354652 start.go:927] validating driver "docker" against &{Name:newest-cni-020433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-020433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:18:23.011670  354652 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:18:23.012271  354652 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:18:23.069368  354652 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:44 SystemTime:2025-11-29 09:18:23.058204556 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:18:23.069670  354652 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1129 09:18:23.069697  354652 cni.go:84] Creating CNI manager for ""
	I1129 09:18:23.069758  354652 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:18:23.069798  354652 start.go:353] cluster config:
	{Name:newest-cni-020433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-020433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:18:23.071469  354652 out.go:179] * Starting "newest-cni-020433" primary control-plane node in "newest-cni-020433" cluster
	I1129 09:18:23.072741  354652 cache.go:134] Beginning downloading kic base image for docker with crio
	I1129 09:18:23.074047  354652 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 09:18:23.075089  354652 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:18:23.075123  354652 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1129 09:18:23.075131  354652 cache.go:65] Caching tarball of preloaded images
	I1129 09:18:23.075237  354652 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 09:18:23.075256  354652 preload.go:238] Found /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1129 09:18:23.075266  354652 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1129 09:18:23.075368  354652 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/config.json ...
	I1129 09:18:23.096739  354652 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 09:18:23.096759  354652 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 09:18:23.096776  354652 cache.go:243] Successfully downloaded all kic artifacts
	I1129 09:18:23.096807  354652 start.go:360] acquireMachinesLock for newest-cni-020433: {Name:mk6347901682a01c9d317c6a402722ce1e16792e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:18:23.096885  354652 start.go:364] duration metric: took 59.245µs to acquireMachinesLock for "newest-cni-020433"
	I1129 09:18:23.096904  354652 start.go:96] Skipping create...Using existing machine configuration
	I1129 09:18:23.096909  354652 fix.go:54] fixHost starting: 
	I1129 09:18:23.097115  354652 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Status}}
	I1129 09:18:23.115627  354652 fix.go:112] recreateIfNeeded on newest-cni-020433: state=Stopped err=<nil>
	W1129 09:18:23.115666  354652 fix.go:138] unexpected machine state, will restart: <nil>
	I1129 09:18:23.117441  354652 out.go:252] * Restarting existing docker container for "newest-cni-020433" ...
	I1129 09:18:23.117513  354652 cli_runner.go:164] Run: docker start newest-cni-020433
	I1129 09:18:23.365176  354652 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Status}}
	I1129 09:18:23.385709  354652 kic.go:430] container "newest-cni-020433" state is running.
	I1129 09:18:23.386157  354652 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-020433
	I1129 09:18:23.407163  354652 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/config.json ...
	I1129 09:18:23.407479  354652 machine.go:94] provisionDockerMachine start ...
	I1129 09:18:23.407552  354652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:18:23.426601  354652 main.go:143] libmachine: Using SSH client type: native
	I1129 09:18:23.426861  354652 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1129 09:18:23.426876  354652 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:18:23.427481  354652 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43040->127.0.0.1:33134: read: connection reset by peer
	I1129 09:18:26.574027  354652 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-020433
	
	I1129 09:18:26.574058  354652 ubuntu.go:182] provisioning hostname "newest-cni-020433"
	I1129 09:18:26.574126  354652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:18:26.593703  354652 main.go:143] libmachine: Using SSH client type: native
	I1129 09:18:26.593937  354652 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1129 09:18:26.593952  354652 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-020433 && echo "newest-cni-020433" | sudo tee /etc/hostname
	I1129 09:18:26.748793  354652 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-020433
	
	I1129 09:18:26.748900  354652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:18:26.768520  354652 main.go:143] libmachine: Using SSH client type: native
	I1129 09:18:26.768766  354652 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1129 09:18:26.768785  354652 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-020433' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-020433/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-020433' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:18:26.913768  354652 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 09:18:26.913798  354652 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-5652/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-5652/.minikube}
	I1129 09:18:26.913825  354652 ubuntu.go:190] setting up certificates
	I1129 09:18:26.913854  354652 provision.go:84] configureAuth start
	I1129 09:18:26.913908  354652 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-020433
	I1129 09:18:26.932382  354652 provision.go:143] copyHostCerts
	I1129 09:18:26.932451  354652 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem, removing ...
	I1129 09:18:26.932462  354652 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem
	I1129 09:18:26.932537  354652 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/ca.pem (1078 bytes)
	I1129 09:18:26.932667  354652 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem, removing ...
	I1129 09:18:26.932678  354652 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem
	I1129 09:18:26.932713  354652 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/cert.pem (1123 bytes)
	I1129 09:18:26.932789  354652 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem, removing ...
	I1129 09:18:26.932797  354652 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem
	I1129 09:18:26.932824  354652 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-5652/.minikube/key.pem (1675 bytes)
	I1129 09:18:26.932918  354652 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem org=jenkins.newest-cni-020433 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-020433]
	I1129 09:18:27.019502  354652 provision.go:177] copyRemoteCerts
	I1129 09:18:27.019562  354652 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:18:27.019636  354652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:18:27.038425  354652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:18:27.140390  354652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1129 09:18:27.158981  354652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1129 09:18:27.177649  354652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1129 09:18:27.196616  354652 provision.go:87] duration metric: took 282.7463ms to configureAuth
	I1129 09:18:27.196647  354652 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:18:27.196878  354652 config.go:182] Loaded profile config "newest-cni-020433": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:18:27.197022  354652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:18:27.216032  354652 main.go:143] libmachine: Using SSH client type: native
	I1129 09:18:27.216354  354652 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1129 09:18:27.216384  354652 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 09:18:27.519413  354652 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 09:18:27.519438  354652 machine.go:97] duration metric: took 4.111941352s to provisionDockerMachine
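	(The block above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts CRI-O. A minimal way to verify that step by hand — a sketch, assuming the profile name and repo-root binary path used throughout this run:)

	    # profile name and expected option value taken from the log above
	    out/minikube-linux-amd64 -p newest-cni-020433 ssh -- \
	      "cat /etc/sysconfig/crio.minikube; systemctl is-active crio"
	    # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' and "active"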
	I1129 09:18:27.519452  354652 start.go:293] postStartSetup for "newest-cni-020433" (driver="docker")
	I1129 09:18:27.519462  354652 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:18:27.519558  354652 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:18:27.519606  354652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:18:27.539047  354652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:18:27.641862  354652 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:18:27.645552  354652 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:18:27.645575  354652 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:18:27.645586  354652 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5652/.minikube/addons for local assets ...
	I1129 09:18:27.645638  354652 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5652/.minikube/files for local assets ...
	I1129 09:18:27.645706  354652 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem -> 92162.pem in /etc/ssl/certs
	I1129 09:18:27.645794  354652 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:18:27.653726  354652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem --> /etc/ssl/certs/92162.pem (1708 bytes)
	I1129 09:18:27.672082  354652 start.go:296] duration metric: took 152.616777ms for postStartSetup
	I1129 09:18:27.672175  354652 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:18:27.672233  354652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:18:27.690654  354652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:18:27.790337  354652 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 09:18:27.794938  354652 fix.go:56] duration metric: took 4.698020903s for fixHost
	I1129 09:18:27.794968  354652 start.go:83] releasing machines lock for "newest-cni-020433", held for 4.698071315s
	I1129 09:18:27.795092  354652 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-020433
	I1129 09:18:27.813713  354652 ssh_runner.go:195] Run: cat /version.json
	I1129 09:18:27.813755  354652 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:18:27.813759  354652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:18:27.813812  354652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:18:27.834323  354652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:18:27.834651  354652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:18:27.984613  354652 ssh_runner.go:195] Run: systemctl --version
	I1129 09:18:27.991157  354652 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 09:18:28.028018  354652 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:18:28.032919  354652 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:18:28.032985  354652 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:18:28.041157  354652 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1129 09:18:28.041179  354652 start.go:496] detecting cgroup driver to use...
	I1129 09:18:28.041215  354652 detect.go:190] detected "systemd" cgroup driver on host os
	I1129 09:18:28.041250  354652 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 09:18:28.055957  354652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 09:18:28.069029  354652 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:18:28.069091  354652 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:18:28.084197  354652 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:18:28.097201  354652 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:18:28.177230  354652 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:18:28.260633  354652 docker.go:234] disabling docker service ...
	I1129 09:18:28.260710  354652 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:18:28.275602  354652 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:18:28.288562  354652 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:18:28.370456  354652 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:18:28.454297  354652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:18:28.466957  354652 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:18:28.481688  354652 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 09:18:28.481741  354652 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:18:28.491235  354652 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1129 09:18:28.491311  354652 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:18:28.500654  354652 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:18:28.510276  354652 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:18:28.519652  354652 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:18:28.528362  354652 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:18:28.537539  354652 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:18:28.546175  354652 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:18:28.555401  354652 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:18:28.563104  354652 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:18:28.571284  354652 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:18:28.651185  354652 ssh_runner.go:195] Run: sudo systemctl restart crio
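	(The sed edits above all target the same drop-in file; one grep confirms the resulting values — a sketch using the paths from the log:)

	    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf
	    # expected per the log: pause_image = "registry.k8s.io/pause:3.10.1",
	    # cgroup_manager = "systemd", conmon_cgroup = "pod", and
	    # "net.ipv4.ip_unprivileged_port_start=0" under default_sysctls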
	I1129 09:18:28.781462  354652 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 09:18:28.781526  354652 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 09:18:28.785726  354652 start.go:564] Will wait 60s for crictl version
	I1129 09:18:28.785791  354652 ssh_runner.go:195] Run: which crictl
	I1129 09:18:28.789542  354652 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:18:28.813688  354652 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1129 09:18:28.813765  354652 ssh_runner.go:195] Run: crio --version
	I1129 09:18:28.842352  354652 ssh_runner.go:195] Run: crio --version
	I1129 09:18:28.872543  354652 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1129 09:18:28.873926  354652 cli_runner.go:164] Run: docker network inspect newest-cni-020433 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:18:28.892798  354652 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1129 09:18:28.897219  354652 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:18:28.909344  354652 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1129 09:18:28.910519  354652 kubeadm.go:884] updating cluster {Name:newest-cni-020433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-020433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:18:28.910669  354652 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:18:28.910732  354652 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:18:28.943507  354652 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:18:28.943530  354652 crio.go:433] Images already preloaded, skipping extraction
	I1129 09:18:28.943576  354652 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:18:28.970109  354652 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:18:28.970136  354652 cache_images.go:86] Images are preloaded, skipping loading
	I1129 09:18:28.970143  354652 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1129 09:18:28.970251  354652 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-020433 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-020433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 09:18:28.970324  354652 ssh_runner.go:195] Run: crio config
	I1129 09:18:29.019432  354652 cni.go:84] Creating CNI manager for ""
	I1129 09:18:29.019463  354652 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1129 09:18:29.019484  354652 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1129 09:18:29.019511  354652 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-020433 NodeName:newest-cni-020433 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:18:29.019719  354652 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-020433"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1129 09:18:29.019794  354652 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 09:18:29.028444  354652 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 09:18:29.028503  354652 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:18:29.036431  354652 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1129 09:18:29.049487  354652 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:18:29.062574  354652 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
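	(The rendered kubeadm config lands at /var/tmp/minikube/kubeadm.yaml.new; on a restart, minikube diffs it against the copy from the previous boot to decide whether the control plane needs reconfiguring — the diff run appears later in this log. A manual equivalent, as a sketch:)

	    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	      && echo "no reconfiguration required"   # diff exits 0 when the files match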
	I1129 09:18:29.075938  354652 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1129 09:18:29.079976  354652 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:18:29.090904  354652 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:18:29.169012  354652 ssh_runner.go:195] Run: sudo systemctl start kubelet
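	(The kubelet unit text rendered above is installed as the systemd drop-in 10-kubeadm.conf, scp'd a few lines up. To see the effective unit on the node — a sketch:)

	    systemctl cat kubelet   # prints kubelet.service plus the 10-kubeadm.conf drop-in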
	I1129 09:18:29.194386  354652 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433 for IP: 192.168.76.2
	I1129 09:18:29.194426  354652 certs.go:195] generating shared ca certs ...
	I1129 09:18:29.194449  354652 certs.go:227] acquiring lock for ca certs: {Name:mk9945b3aa42b3d50b9bfd795af3a1c63f4e35bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:18:29.194605  354652 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key
	I1129 09:18:29.194643  354652 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key
	I1129 09:18:29.194652  354652 certs.go:257] generating profile certs ...
	I1129 09:18:29.194740  354652 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/client.key
	I1129 09:18:29.194805  354652 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.key.22e84c70
	I1129 09:18:29.194866  354652 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.key
	I1129 09:18:29.194982  354652 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216.pem (1338 bytes)
	W1129 09:18:29.195015  354652 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216_empty.pem, impossibly tiny 0 bytes
	I1129 09:18:29.195025  354652 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca-key.pem (1679 bytes)
	I1129 09:18:29.195049  354652 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/ca.pem (1078 bytes)
	I1129 09:18:29.195077  354652 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:18:29.195102  354652 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/certs/key.pem (1675 bytes)
	I1129 09:18:29.195145  354652 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem (1708 bytes)
	I1129 09:18:29.195819  354652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:18:29.215575  354652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 09:18:29.235466  354652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:18:29.256295  354652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1129 09:18:29.281061  354652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1129 09:18:29.300152  354652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1129 09:18:29.318868  354652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:18:29.337971  354652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/newest-cni-020433/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1129 09:18:29.356698  354652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/certs/9216.pem --> /usr/share/ca-certificates/9216.pem (1338 bytes)
	I1129 09:18:29.375486  354652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/ssl/certs/92162.pem --> /usr/share/ca-certificates/92162.pem (1708 bytes)
	I1129 09:18:29.395137  354652 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:18:29.414138  354652 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:18:29.427192  354652 ssh_runner.go:195] Run: openssl version
	I1129 09:18:29.433526  354652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9216.pem && ln -fs /usr/share/ca-certificates/9216.pem /etc/ssl/certs/9216.pem"
	I1129 09:18:29.442578  354652 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9216.pem
	I1129 09:18:29.446722  354652 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:34 /usr/share/ca-certificates/9216.pem
	I1129 09:18:29.446791  354652 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9216.pem
	I1129 09:18:29.481498  354652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9216.pem /etc/ssl/certs/51391683.0"
	I1129 09:18:29.490132  354652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/92162.pem && ln -fs /usr/share/ca-certificates/92162.pem /etc/ssl/certs/92162.pem"
	I1129 09:18:29.499329  354652 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/92162.pem
	I1129 09:18:29.503493  354652 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:34 /usr/share/ca-certificates/92162.pem
	I1129 09:18:29.503551  354652 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/92162.pem
	I1129 09:18:29.538088  354652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/92162.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 09:18:29.547245  354652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:18:29.556393  354652 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:18:29.560581  354652 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:28 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:18:29.560658  354652 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:18:29.597444  354652 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 09:18:29.606341  354652 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:18:29.610482  354652 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1129 09:18:29.645309  354652 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1129 09:18:29.680185  354652 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1129 09:18:29.720879  354652 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1129 09:18:29.763605  354652 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1129 09:18:29.809504  354652 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
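	(Two openssl idioms appear above: the subject-hash symlink that makes a CA discoverable under /etc/ssl/certs, and -checkend, which fails if a cert expires within the given number of seconds. A condensed sketch using the paths from this run:)

	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    echo "$hash"   # b5213941, matching the /etc/ssl/certs/b5213941.0 symlink above
	    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	      -checkend 86400 && echo "valid for at least another 24h"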
	I1129 09:18:29.865491  354652 kubeadm.go:401] StartCluster: {Name:newest-cni-020433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-020433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:18:29.865603  354652 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 09:18:29.865681  354652 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:18:29.897938  354652 cri.go:89] found id: "5d737be9886c464ab2ee4b01f6470c5147ee1d043d43b8028fca68dff34978c1"
	I1129 09:18:29.897973  354652 cri.go:89] found id: "cc99378f3afa1b471c013ab021cd9149dfe1e247b71c547969ec37147062fe7a"
	I1129 09:18:29.897979  354652 cri.go:89] found id: "78d9ae9dc233e43c0a5758db285e1a9283d698f9ec56f9f3e6086457bf96931f"
	I1129 09:18:29.897983  354652 cri.go:89] found id: "38ea4a65fa801144421a1e26140edce1f295aaa42e72beef2ceab2847588e64c"
	I1129 09:18:29.897987  354652 cri.go:89] found id: ""
	I1129 09:18:29.898037  354652 ssh_runner.go:195] Run: sudo runc list -f json
	W1129 09:18:29.910708  354652 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T09:18:29Z" level=error msg="open /run/runc: no such file or directory"
	I1129 09:18:29.910790  354652 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:18:29.919545  354652 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1129 09:18:29.919578  354652 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1129 09:18:29.919631  354652 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1129 09:18:29.927474  354652 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1129 09:18:29.927889  354652 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-020433" does not appear in /home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 09:18:29.928022  354652 kubeconfig.go:62] /home/jenkins/minikube-integration/22000-5652/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-020433" cluster setting kubeconfig missing "newest-cni-020433" context setting]
	I1129 09:18:29.928321  354652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/kubeconfig: {Name:mk6280be5f60ace95b1c1acc672c087e2366542f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:18:29.929486  354652 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1129 09:18:29.937695  354652 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1129 09:18:29.937739  354652 kubeadm.go:602] duration metric: took 18.154953ms to restartPrimaryControlPlane
	I1129 09:18:29.937752  354652 kubeadm.go:403] duration metric: took 72.273654ms to StartCluster
	I1129 09:18:29.937771  354652 settings.go:142] acquiring lock: {Name:mkebe0b2667fa03f51e92459cd7b7f37fd4c23bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:18:29.937881  354652 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 09:18:29.938477  354652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5652/kubeconfig: {Name:mk6280be5f60ace95b1c1acc672c087e2366542f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:18:29.938714  354652 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 09:18:29.938781  354652 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 09:18:29.938910  354652 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-020433"
	I1129 09:18:29.938930  354652 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-020433"
	W1129 09:18:29.938943  354652 addons.go:248] addon storage-provisioner should already be in state true
	I1129 09:18:29.938942  354652 addons.go:70] Setting dashboard=true in profile "newest-cni-020433"
	I1129 09:18:29.938966  354652 addons.go:239] Setting addon dashboard=true in "newest-cni-020433"
	I1129 09:18:29.938972  354652 host.go:66] Checking if "newest-cni-020433" exists ...
	W1129 09:18:29.938980  354652 addons.go:248] addon dashboard should already be in state true
	I1129 09:18:29.938979  354652 addons.go:70] Setting default-storageclass=true in profile "newest-cni-020433"
	I1129 09:18:29.939011  354652 host.go:66] Checking if "newest-cni-020433" exists ...
	I1129 09:18:29.939012  354652 config.go:182] Loaded profile config "newest-cni-020433": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:18:29.939015  354652 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-020433"
	I1129 09:18:29.939462  354652 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Status}}
	I1129 09:18:29.939492  354652 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Status}}
	I1129 09:18:29.939529  354652 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Status}}
	I1129 09:18:29.944407  354652 out.go:179] * Verifying Kubernetes components...
	I1129 09:18:29.945756  354652 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:18:29.967660  354652 addons.go:239] Setting addon default-storageclass=true in "newest-cni-020433"
	W1129 09:18:29.967686  354652 addons.go:248] addon default-storageclass should already be in state true
	I1129 09:18:29.967719  354652 host.go:66] Checking if "newest-cni-020433" exists ...
	I1129 09:18:29.968224  354652 cli_runner.go:164] Run: docker container inspect newest-cni-020433 --format={{.State.Status}}
	I1129 09:18:29.968702  354652 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:18:29.970033  354652 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1129 09:18:29.970082  354652 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:18:29.970098  354652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 09:18:29.970155  354652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:18:29.972364  354652 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1129 09:18:29.973592  354652 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1129 09:18:29.973614  354652 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1129 09:18:29.973680  354652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:18:30.001828  354652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:18:30.005940  354652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:18:30.008996  354652 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 09:18:30.009023  354652 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 09:18:30.009083  354652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-020433
	I1129 09:18:30.035326  354652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/newest-cni-020433/id_rsa Username:docker}
	I1129 09:18:30.108596  354652 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:18:30.124649  354652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:18:30.125762  354652 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:18:30.125820  354652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:18:30.127338  354652 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1129 09:18:30.127439  354652 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1129 09:18:30.145021  354652 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1129 09:18:30.145051  354652 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1129 09:18:30.147415  354652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 09:18:30.163720  354652 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1129 09:18:30.163747  354652 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1129 09:18:30.181041  354652 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1129 09:18:30.181065  354652 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1129 09:18:30.199375  354652 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1129 09:18:30.199405  354652 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1129 09:18:30.218945  354652 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1129 09:18:30.218974  354652 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1129 09:18:30.232479  354652 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1129 09:18:30.232511  354652 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1129 09:18:30.246685  354652 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1129 09:18:30.246723  354652 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1129 09:18:30.260912  354652 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1129 09:18:30.260939  354652 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1129 09:18:30.274728  354652 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1129 09:18:31.962140  354652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.837414505s)
	I1129 09:18:31.962216  354652 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.836375812s)
	I1129 09:18:31.962250  354652 api_server.go:72] duration metric: took 2.023508709s to wait for apiserver process to appear ...
	I1129 09:18:31.962259  354652 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:18:31.962278  354652 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 09:18:31.962311  354652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.814870664s)
	I1129 09:18:31.962430  354652 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.687663774s)
	I1129 09:18:31.964213  354652 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-020433 addons enable metrics-server
	
	I1129 09:18:31.970199  354652 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1129 09:18:31.970230  354652 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1129 09:18:31.978585  354652 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1129 09:18:31.979771  354652 addons.go:530] duration metric: took 2.041002343s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
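	(With the three addons applied, a quick cross-check from the host — a sketch; the kubernetes-dashboard namespace name is an assumption based on minikube's stock dashboard manifests, not shown in this log:)

	    out/minikube-linux-amd64 -p newest-cni-020433 addons list
	    # namespace name is assumed from the standard dashboard addon
	    kubectl --context newest-cni-020433 -n kubernetes-dashboard get pods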
	I1129 09:18:32.463030  354652 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 09:18:32.467613  354652 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1129 09:18:32.467645  354652 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1129 09:18:32.963259  354652 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 09:18:32.967852  354652 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1129 09:18:32.969016  354652 api_server.go:141] control plane version: v1.34.1
	I1129 09:18:32.969074  354652 api_server.go:131] duration metric: took 1.006809193s to wait for apiserver health ...
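	(The per-check breakdown in the 500 responses above is what the apiserver returns while any healthz check is failing; appending ?verbose forces the same breakdown even on success. The equivalent probe by hand — a sketch; /healthz is readable anonymously under the default RBAC:)

	    curl -sk "https://192.168.76.2:8443/healthz?verbose"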
	I1129 09:18:32.969086  354652 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:18:32.972682  354652 system_pods.go:59] 8 kube-system pods found
	I1129 09:18:32.972727  354652 system_pods.go:61] "coredns-66bc5c9577-h8nqv" [c8cbc934-0df3-44c5-a3d7-fff7ca54ef86] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1129 09:18:32.972737  354652 system_pods.go:61] "etcd-newest-cni-020433" [47991984-6243-463b-9cda-95d0e18b6092] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 09:18:32.972748  354652 system_pods.go:61] "kindnet-gxgwn" [7e13d750-7bcf-4e2a-9663-512ecc23781a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1129 09:18:32.972758  354652 system_pods.go:61] "kube-apiserver-newest-cni-020433" [20641eff-ff31-4e31-8983-1075116bcdd4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 09:18:32.972763  354652 system_pods.go:61] "kube-controller-manager-newest-cni-020433" [f5bece62-e41a-4cf6-bacc-29d4dd0754cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 09:18:32.972784  354652 system_pods.go:61] "kube-proxy-nqwzp" [118d6bdc-5c33-4ab5-bee8-6f8a3447c461] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1129 09:18:32.972796  354652 system_pods.go:61] "kube-scheduler-newest-cni-020433" [3224b587-95a1-4963-88ae-af38a3bd1d84] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 09:18:32.972800  354652 system_pods.go:61] "storage-provisioner" [30a16c03-a054-435c-8eec-ce64486eb6c2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1129 09:18:32.972807  354652 system_pods.go:74] duration metric: took 3.715736ms to wait for pod list to return data ...
	I1129 09:18:32.972817  354652 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:18:32.975621  354652 default_sa.go:45] found service account: "default"
	I1129 09:18:32.975644  354652 default_sa.go:55] duration metric: took 2.822466ms for default service account to be created ...
	I1129 09:18:32.975656  354652 kubeadm.go:587] duration metric: took 3.036913728s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1129 09:18:32.975692  354652 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:18:32.978703  354652 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1129 09:18:32.978732  354652 node_conditions.go:123] node cpu capacity is 8
	I1129 09:18:32.978746  354652 node_conditions.go:105] duration metric: took 3.050623ms to run NodePressure ...
	I1129 09:18:32.978761  354652 start.go:242] waiting for startup goroutines ...
	I1129 09:18:32.978771  354652 start.go:247] waiting for cluster config update ...
	I1129 09:18:32.978785  354652 start.go:256] writing updated cluster config ...
	I1129 09:18:32.979099  354652 ssh_runner.go:195] Run: rm -f paused
	I1129 09:18:33.029926  354652 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1129 09:18:33.032684  354652 out.go:179] * Done! kubectl is now configured to use "newest-cni-020433" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.571748272Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-nqwzp/POD" id=f9abdb11-2ad5-4a5d-9915-a6ea78774a31 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.57180103Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.572991207Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.573821374Z" level=info msg="Ran pod sandbox dd7cc21d3b639aebcdf23170aee53b8d2093fd875b54b274e5ff42839b1a0024 with infra container: kube-system/kindnet-gxgwn/POD" id=b6d7b83e-a9c7-490b-a966-5086bc6a730c name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.5751531Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=a07a1d19-7ef7-4744-beb0-3964b72fde1b name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.575732231Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=f9abdb11-2ad5-4a5d-9915-a6ea78774a31 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.576525063Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=f4d84ca1-27a2-4541-8b8d-1c77c719af18 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.577525258Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.577724164Z" level=info msg="Creating container: kube-system/kindnet-gxgwn/kindnet-cni" id=3b8f28a4-8545-4309-8425-abac057bca74 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.577821842Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.578483435Z" level=info msg="Ran pod sandbox 9c813698072e9895f5566d154b5a0e5b3d792c34341f479821cb6027ce0f7a5c with infra container: kube-system/kube-proxy-nqwzp/POD" id=f9abdb11-2ad5-4a5d-9915-a6ea78774a31 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.579468393Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=61022e37-970b-42e7-aa9f-b049c0877fa2 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.581348164Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=976f6058-11a4-4f77-ba83-fe75e6d222f1 name=/runtime.v1.ImageService/ImageStatus
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.582168216Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.583341296Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.583446868Z" level=info msg="Creating container: kube-system/kube-proxy-nqwzp/kube-proxy" id=6e898d0a-4e31-4478-9a21-1c4114506804 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.583602218Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.588739214Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.589349278Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.617144868Z" level=info msg="Created container 1466ac1e233eec9fe67dd8a0194308e6dedef82d9e2679668d0e7f5cfdf904be: kube-system/kindnet-gxgwn/kindnet-cni" id=3b8f28a4-8545-4309-8425-abac057bca74 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.617883707Z" level=info msg="Starting container: 1466ac1e233eec9fe67dd8a0194308e6dedef82d9e2679668d0e7f5cfdf904be" id=382d9f92-ed18-48fc-b7f9-297a161c8ba9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.619936016Z" level=info msg="Started container" PID=1056 containerID=1466ac1e233eec9fe67dd8a0194308e6dedef82d9e2679668d0e7f5cfdf904be description=kube-system/kindnet-gxgwn/kindnet-cni id=382d9f92-ed18-48fc-b7f9-297a161c8ba9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dd7cc21d3b639aebcdf23170aee53b8d2093fd875b54b274e5ff42839b1a0024
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.620601192Z" level=info msg="Created container c812c112070adb5b5f9bdaf23aee437cad110ec9b1047b6413e4de4a8bef25e0: kube-system/kube-proxy-nqwzp/kube-proxy" id=6e898d0a-4e31-4478-9a21-1c4114506804 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.62133129Z" level=info msg="Starting container: c812c112070adb5b5f9bdaf23aee437cad110ec9b1047b6413e4de4a8bef25e0" id=7b620ca1-6671-4135-8eeb-c40006ac4cab name=/runtime.v1.RuntimeService/StartContainer
	Nov 29 09:18:32 newest-cni-020433 crio[526]: time="2025-11-29T09:18:32.624931873Z" level=info msg="Started container" PID=1057 containerID=c812c112070adb5b5f9bdaf23aee437cad110ec9b1047b6413e4de4a8bef25e0 description=kube-system/kube-proxy-nqwzp/kube-proxy id=7b620ca1-6671-4135-8eeb-c40006ac4cab name=/runtime.v1.RuntimeService/StartContainer sandboxID=9c813698072e9895f5566d154b5a0e5b3d792c34341f479821cb6027ce0f7a5c
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	c812c112070ad       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   6 seconds ago       Running             kube-proxy                1                   9c813698072e9       kube-proxy-nqwzp                            kube-system
	1466ac1e233ee       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 seconds ago       Running             kindnet-cni               1                   dd7cc21d3b639       kindnet-gxgwn                               kube-system
	5d737be9886c4       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   9 seconds ago       Running             kube-apiserver            1                   770d7ad5d91d9       kube-apiserver-newest-cni-020433            kube-system
	cc99378f3afa1       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   9 seconds ago       Running             kube-scheduler            1                   453a0795c6b25       kube-scheduler-newest-cni-020433            kube-system
	78d9ae9dc233e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   9 seconds ago       Running             etcd                      1                   5e1d74100a901       etcd-newest-cni-020433                      kube-system
	38ea4a65fa801       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   9 seconds ago       Running             kube-controller-manager   1                   95b60c2388bfc       kube-controller-manager-newest-cni-020433   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-020433
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-020433
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=newest-cni-020433
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T09_17_56_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 09:17:53 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-020433
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 09:18:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 09:18:31 +0000   Sat, 29 Nov 2025 09:17:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 09:18:31 +0000   Sat, 29 Nov 2025 09:17:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 09:18:31 +0000   Sat, 29 Nov 2025 09:17:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 29 Nov 2025 09:18:31 +0000   Sat, 29 Nov 2025 09:17:51 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-020433
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                9e13478b-5cce-4854-b5e2-d069a5e427ce
	  Boot ID:                    5b66e152-4b71-4129-a911-cefbf4861c86
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-020433                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         43s
	  kube-system                 kindnet-gxgwn                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      37s
	  kube-system                 kube-apiserver-newest-cni-020433             250m (3%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-controller-manager-newest-cni-020433    200m (2%)     0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-proxy-nqwzp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-scheduler-newest-cni-020433             100m (1%)     0 (0%)      0 (0%)           0 (0%)         43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 37s                kube-proxy       
	  Normal  Starting                 6s                 kube-proxy       
	  Normal  Starting                 48s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  48s (x8 over 48s)  kubelet          Node newest-cni-020433 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    48s (x8 over 48s)  kubelet          Node newest-cni-020433 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     48s (x8 over 48s)  kubelet          Node newest-cni-020433 status is now: NodeHasSufficientPID
	  Normal  Starting                 43s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  43s                kubelet          Node newest-cni-020433 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s                kubelet          Node newest-cni-020433 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s                kubelet          Node newest-cni-020433 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           38s                node-controller  Node newest-cni-020433 event: Registered Node newest-cni-020433 in Controller
	  Normal  RegisteredNode           4s                 node-controller  Node newest-cni-020433 event: Registered Node newest-cni-020433 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 e1 87 e4 84 ae 08 06
	[  +0.002640] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8a c7 6c 3e 12 d1 08 06
	[ +14.778105] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 56 8a 29 c7 2a 6f 08 06
	[ +13.520989] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 86 32 aa eb 52 77 08 06
	[  +0.000371] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 56 8a 29 c7 2a 6f 08 06
	[ +14.762906] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 df ae 93 da f6 08 06
	[  +0.000384] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a c7 6c 3e 12 d1 08 06
	[Nov29 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 2e a3 ac f9 5c c0 08 06
	[ +13.297626] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 33 ff 54 aa a1 08 06
	[  +7.390906] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0a 68 f3 3d 3c e9 08 06
	[  +0.000436] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 2e a3 ac f9 5c c0 08 06
	[  +7.145922] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ea c3 2f 2a a3 01 08 06
	[  +0.000415] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e2 33 ff 54 aa a1 08 06
	
	
	==> etcd [78d9ae9dc233e43c0a5758db285e1a9283d698f9ec56f9f3e6086457bf96931f] <==
	{"level":"warn","ts":"2025-11-29T09:18:30.800386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.811239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.821207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.828770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.835174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.842135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.850209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.856557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.862561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.869688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.880009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.887635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.893962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.901099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.908377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.914556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.929702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.937262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.944653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.951231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.957560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.981829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.989251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:30.996692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:18:31.049514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58594","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:18:39 up  1:01,  0 user,  load average: 2.10, 3.37, 2.43
	Linux newest-cni-020433 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1466ac1e233eec9fe67dd8a0194308e6dedef82d9e2679668d0e7f5cfdf904be] <==
	I1129 09:18:32.896434       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 09:18:32.896712       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1129 09:18:32.896875       1 main.go:148] setting mtu 1500 for CNI 
	I1129 09:18:32.896892       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 09:18:32.896913       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T09:18:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 09:18:33.098498       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 09:18:33.098532       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 09:18:33.098544       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 09:18:33.099663       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1129 09:18:33.419143       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 09:18:33.419181       1 metrics.go:72] Registering metrics
	I1129 09:18:33.419257       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [5d737be9886c464ab2ee4b01f6470c5147ee1d043d43b8028fca68dff34978c1] <==
	I1129 09:18:31.519730       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1129 09:18:31.519888       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1129 09:18:31.520355       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1129 09:18:31.520440       1 aggregator.go:171] initial CRD sync complete...
	I1129 09:18:31.520488       1 autoregister_controller.go:144] Starting autoregister controller
	I1129 09:18:31.520514       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1129 09:18:31.520542       1 cache.go:39] Caches are synced for autoregister controller
	I1129 09:18:31.520631       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1129 09:18:31.525734       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1129 09:18:31.529632       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1129 09:18:31.546217       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:18:31.549298       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 09:18:31.770631       1 controller.go:667] quota admission added evaluator for: namespaces
	I1129 09:18:31.802288       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 09:18:31.823518       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 09:18:31.833438       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 09:18:31.840442       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 09:18:31.879218       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.66.86"}
	I1129 09:18:31.892682       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.176.109"}
	I1129 09:18:32.422788       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 09:18:35.240235       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 09:18:35.240291       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 09:18:35.291047       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 09:18:35.440163       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 09:18:35.491957       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [38ea4a65fa801144421a1e26140edce1f295aaa42e72beef2ceab2847588e64c] <==
	I1129 09:18:34.875987       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1129 09:18:34.887630       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1129 09:18:34.887648       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1129 09:18:34.887666       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1129 09:18:34.887693       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1129 09:18:34.887695       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1129 09:18:34.887700       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1129 09:18:34.887777       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1129 09:18:34.887780       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1129 09:18:34.887777       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1129 09:18:34.889609       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1129 09:18:34.892337       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1129 09:18:34.893348       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1129 09:18:34.893454       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:18:34.895610       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1129 09:18:34.898883       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1129 09:18:34.898985       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1129 09:18:34.899052       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-020433"
	I1129 09:18:34.899104       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1129 09:18:34.902700       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1129 09:18:34.905124       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1129 09:18:34.912203       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:18:34.912263       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:18:34.912271       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1129 09:18:34.912279       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [c812c112070adb5b5f9bdaf23aee437cad110ec9b1047b6413e4de4a8bef25e0] <==
	I1129 09:18:32.661314       1 server_linux.go:53] "Using iptables proxy"
	I1129 09:18:32.731687       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 09:18:32.832545       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 09:18:32.832586       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1129 09:18:32.832664       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 09:18:32.851511       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 09:18:32.851569       1 server_linux.go:132] "Using iptables Proxier"
	I1129 09:18:32.856774       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 09:18:32.857256       1 server.go:527] "Version info" version="v1.34.1"
	I1129 09:18:32.857295       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:18:32.858703       1 config.go:200] "Starting service config controller"
	I1129 09:18:32.858717       1 config.go:106] "Starting endpoint slice config controller"
	I1129 09:18:32.858727       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 09:18:32.858735       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 09:18:32.858811       1 config.go:309] "Starting node config controller"
	I1129 09:18:32.858824       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 09:18:32.858804       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 09:18:32.858949       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 09:18:32.959003       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1129 09:18:32.959057       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 09:18:32.959077       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 09:18:32.959122       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [cc99378f3afa1b471c013ab021cd9149dfe1e247b71c547969ec37147062fe7a] <==
	I1129 09:18:30.325109       1 serving.go:386] Generated self-signed cert in-memory
	I1129 09:18:31.482595       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1129 09:18:31.482686       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:18:31.487159       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1129 09:18:31.487220       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1129 09:18:31.487316       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:18:31.487316       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1129 09:18:31.487350       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1129 09:18:31.487338       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:18:31.487876       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1129 09:18:31.487927       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1129 09:18:31.587764       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:18:31.587780       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1129 09:18:31.587900       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Nov 29 09:18:31 newest-cni-020433 kubelet[681]: E1129 09:18:31.301631     681 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-020433\" not found" node="newest-cni-020433"
	Nov 29 09:18:31 newest-cni-020433 kubelet[681]: E1129 09:18:31.302166     681 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-020433\" not found" node="newest-cni-020433"
	Nov 29 09:18:31 newest-cni-020433 kubelet[681]: E1129 09:18:31.302343     681 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-020433\" not found" node="newest-cni-020433"
	Nov 29 09:18:31 newest-cni-020433 kubelet[681]: I1129 09:18:31.465059     681 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-020433"
	Nov 29 09:18:31 newest-cni-020433 kubelet[681]: I1129 09:18:31.544304     681 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-020433"
	Nov 29 09:18:31 newest-cni-020433 kubelet[681]: I1129 09:18:31.544426     681 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-020433"
	Nov 29 09:18:31 newest-cni-020433 kubelet[681]: I1129 09:18:31.544459     681 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 29 09:18:31 newest-cni-020433 kubelet[681]: I1129 09:18:31.545431     681 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 29 09:18:31 newest-cni-020433 kubelet[681]: E1129 09:18:31.580901     681 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-020433\" already exists" pod="kube-system/kube-apiserver-newest-cni-020433"
	Nov 29 09:18:31 newest-cni-020433 kubelet[681]: I1129 09:18:31.580943     681 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-020433"
	Nov 29 09:18:31 newest-cni-020433 kubelet[681]: E1129 09:18:31.587988     681 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-020433\" already exists" pod="kube-system/kube-controller-manager-newest-cni-020433"
	Nov 29 09:18:31 newest-cni-020433 kubelet[681]: I1129 09:18:31.588036     681 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-020433"
	Nov 29 09:18:31 newest-cni-020433 kubelet[681]: E1129 09:18:31.595049     681 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-020433\" already exists" pod="kube-system/kube-scheduler-newest-cni-020433"
	Nov 29 09:18:31 newest-cni-020433 kubelet[681]: I1129 09:18:31.595096     681 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-020433"
	Nov 29 09:18:31 newest-cni-020433 kubelet[681]: E1129 09:18:31.602320     681 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-020433\" already exists" pod="kube-system/etcd-newest-cni-020433"
	Nov 29 09:18:32 newest-cni-020433 kubelet[681]: I1129 09:18:32.260691     681 apiserver.go:52] "Watching apiserver"
	Nov 29 09:18:32 newest-cni-020433 kubelet[681]: I1129 09:18:32.265591     681 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 29 09:18:32 newest-cni-020433 kubelet[681]: I1129 09:18:32.357930     681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/118d6bdc-5c33-4ab5-bee8-6f8a3447c461-xtables-lock\") pod \"kube-proxy-nqwzp\" (UID: \"118d6bdc-5c33-4ab5-bee8-6f8a3447c461\") " pod="kube-system/kube-proxy-nqwzp"
	Nov 29 09:18:32 newest-cni-020433 kubelet[681]: I1129 09:18:32.357988     681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e13d750-7bcf-4e2a-9663-512ecc23781a-xtables-lock\") pod \"kindnet-gxgwn\" (UID: \"7e13d750-7bcf-4e2a-9663-512ecc23781a\") " pod="kube-system/kindnet-gxgwn"
	Nov 29 09:18:32 newest-cni-020433 kubelet[681]: I1129 09:18:32.358067     681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/118d6bdc-5c33-4ab5-bee8-6f8a3447c461-lib-modules\") pod \"kube-proxy-nqwzp\" (UID: \"118d6bdc-5c33-4ab5-bee8-6f8a3447c461\") " pod="kube-system/kube-proxy-nqwzp"
	Nov 29 09:18:32 newest-cni-020433 kubelet[681]: I1129 09:18:32.358099     681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7e13d750-7bcf-4e2a-9663-512ecc23781a-cni-cfg\") pod \"kindnet-gxgwn\" (UID: \"7e13d750-7bcf-4e2a-9663-512ecc23781a\") " pod="kube-system/kindnet-gxgwn"
	Nov 29 09:18:32 newest-cni-020433 kubelet[681]: I1129 09:18:32.358120     681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e13d750-7bcf-4e2a-9663-512ecc23781a-lib-modules\") pod \"kindnet-gxgwn\" (UID: \"7e13d750-7bcf-4e2a-9663-512ecc23781a\") " pod="kube-system/kindnet-gxgwn"
	Nov 29 09:18:34 newest-cni-020433 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 29 09:18:34 newest-cni-020433 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 29 09:18:34 newest-cni-020433 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-020433 -n newest-cni-020433
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-020433 -n newest-cni-020433: exit status 2 (335.277685ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-020433 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-h8nqv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-7q86r kubernetes-dashboard-855c9754f9-2n8qz
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-020433 describe pod coredns-66bc5c9577-h8nqv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-7q86r kubernetes-dashboard-855c9754f9-2n8qz
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-020433 describe pod coredns-66bc5c9577-h8nqv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-7q86r kubernetes-dashboard-855c9754f9-2n8qz: exit status 1 (62.21609ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-h8nqv" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-7q86r" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-2n8qz" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-020433 describe pod coredns-66bc5c9577-h8nqv storage-provisioner dashboard-metrics-scraper-6ffb444bf9-7q86r kubernetes-dashboard-855c9754f9-2n8qz: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.05s)
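For local triage of this failure, a minimal sketch of the same pause/status flow the harness runs above, built only from commands that appear verbatim in this report (profile newest-cni-020433; the pause subcommand is the operation under test here):

	# Pause the profile, then query apiserver state the way the post-mortem does.
	out/minikube-linux-amd64 pause -p newest-cni-020433
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-020433 -n newest-cni-020433
	# Mirror the non-running-pods check from helpers_test.go (quoting added so the
	# shell does not expand the jsonpath expression).
	kubectl --context newest-cni-020433 get po -A \
		--field-selector=status.phase!=Running \
		-o=jsonpath='{.items[*].metadata.name}'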

                                                
                                    

Test pass (263/328)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 4.93
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 3.53
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.23
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 0.41
21 TestBinaryMirror 0.81
22 TestOffline 87.8
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 124.86
31 TestAddons/serial/GCPAuth/Namespaces 0.13
32 TestAddons/serial/GCPAuth/FakeCredentials 9.45
48 TestAddons/StoppedEnableDisable 16.72
49 TestCertOptions 30.43
50 TestCertExpiration 207.97
52 TestForceSystemdFlag 43.33
53 TestForceSystemdEnv 30.1
58 TestErrorSpam/setup 19.71
59 TestErrorSpam/start 0.68
60 TestErrorSpam/status 0.95
61 TestErrorSpam/pause 6.5
62 TestErrorSpam/unpause 6.46
63 TestErrorSpam/stop 8.12
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 36.4
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.16
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.12
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.47
75 TestFunctional/serial/CacheCmd/cache/add_local 0.79
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.56
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 42.03
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.23
86 TestFunctional/serial/LogsFileCmd 1.24
87 TestFunctional/serial/InvalidService 4.39
89 TestFunctional/parallel/ConfigCmd 0.49
90 TestFunctional/parallel/DashboardCmd 6.97
91 TestFunctional/parallel/DryRun 0.47
92 TestFunctional/parallel/InternationalLanguage 0.19
93 TestFunctional/parallel/StatusCmd 1.17
98 TestFunctional/parallel/AddonsCmd 0.16
99 TestFunctional/parallel/PersistentVolumeClaim 22.03
101 TestFunctional/parallel/SSHCmd 0.71
102 TestFunctional/parallel/CpCmd 1.66
103 TestFunctional/parallel/MySQL 14.94
104 TestFunctional/parallel/FileSync 0.38
105 TestFunctional/parallel/CertSync 1.98
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.72
113 TestFunctional/parallel/License 0.32
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.49
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.88
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
120 TestFunctional/parallel/ImageCommands/ImageBuild 2.77
121 TestFunctional/parallel/ImageCommands/Setup 0.43
122 TestFunctional/parallel/Version/short 0.08
123 TestFunctional/parallel/Version/components 0.71
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.22
128 TestFunctional/parallel/ProfileCmd/profile_not_create 0.48
129 TestFunctional/parallel/ProfileCmd/profile_list 0.54
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.51
134 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
147 TestFunctional/parallel/MountCmd/any-port 6.97
148 TestFunctional/parallel/MountCmd/specific-port 1.8
149 TestFunctional/parallel/MountCmd/VerifyCleanup 1.6
150 TestFunctional/parallel/ServiceCmd/List 1.72
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.7
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 140.7
163 TestMultiControlPlane/serial/DeployApp 5.4
164 TestMultiControlPlane/serial/PingHostFromPods 1.03
165 TestMultiControlPlane/serial/AddWorkerNode 22.86
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.9
168 TestMultiControlPlane/serial/CopyFile 17.31
169 TestMultiControlPlane/serial/StopSecondaryNode 19.8
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.72
171 TestMultiControlPlane/serial/RestartSecondaryNode 14.72
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.92
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 105.02
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.57
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.71
176 TestMultiControlPlane/serial/StopCluster 41.6
177 TestMultiControlPlane/serial/RestartCluster 56.31
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.7
179 TestMultiControlPlane/serial/AddSecondaryNode 41.3
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.91
185 TestJSONOutput/start/Command 41.66
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 7.95
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.23
210 TestKicCustomNetwork/create_custom_network 27.95
211 TestKicCustomNetwork/use_default_bridge_network 22.95
212 TestKicExistingNetwork 26.64
213 TestKicCustomSubnet 22.85
214 TestKicStaticIP 23.63
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 47.68
219 TestMountStart/serial/StartWithMountFirst 7.67
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 4.99
222 TestMountStart/serial/VerifyMountSecond 0.28
223 TestMountStart/serial/DeleteFirst 1.68
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.25
226 TestMountStart/serial/RestartStopped 7.29
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 68.02
231 TestMultiNode/serial/DeployApp2Nodes 3.8
232 TestMultiNode/serial/PingHostFrom2Pods 0.72
233 TestMultiNode/serial/AddNode 25.52
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.66
236 TestMultiNode/serial/CopyFile 9.98
237 TestMultiNode/serial/StopNode 2.27
238 TestMultiNode/serial/StartAfterStop 7.2
239 TestMultiNode/serial/RestartKeepsNodes 78.9
240 TestMultiNode/serial/DeleteNode 5.25
241 TestMultiNode/serial/StopMultiNode 28.5
242 TestMultiNode/serial/RestartMultiNode 26.85
243 TestMultiNode/serial/ValidateNameConflict 22.56
248 TestPreload 100.02
250 TestScheduledStopUnix 94.35
253 TestInsufficientStorage 12.34
254 TestRunningBinaryUpgrade 296
256 TestKubernetesUpgrade 302.5
257 TestMissingContainerUpgrade 85.64
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 39.19
261 TestNoKubernetes/serial/StartWithStopK8s 23.65
262 TestNoKubernetes/serial/Start 6.4
263 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
265 TestNoKubernetes/serial/ProfileList 1.94
266 TestNoKubernetes/serial/Stop 1.29
267 TestNoKubernetes/serial/StartNoArgs 7.54
268 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.32
276 TestNetworkPlugins/group/false 5.15
280 TestStoppedBinaryUpgrade/Setup 0.64
281 TestStoppedBinaryUpgrade/Upgrade 286.46
290 TestPause/serial/Start 41.65
291 TestPause/serial/SecondStartNoReconfiguration 6.16
293 TestNetworkPlugins/group/auto/Start 41.54
294 TestNetworkPlugins/group/auto/KubeletFlags 0.29
295 TestNetworkPlugins/group/auto/NetCatPod 8.19
296 TestNetworkPlugins/group/auto/DNS 0.11
297 TestNetworkPlugins/group/auto/Localhost 0.09
298 TestNetworkPlugins/group/auto/HairPin 0.1
299 TestNetworkPlugins/group/kindnet/Start 40.01
300 TestNetworkPlugins/group/calico/Start 52.68
301 TestStoppedBinaryUpgrade/MinikubeLogs 1.11
302 TestNetworkPlugins/group/custom-flannel/Start 54.61
303 TestNetworkPlugins/group/enable-default-cni/Start 61.55
304 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
305 TestNetworkPlugins/group/kindnet/KubeletFlags 0.43
306 TestNetworkPlugins/group/kindnet/NetCatPod 8.26
307 TestNetworkPlugins/group/kindnet/DNS 0.11
308 TestNetworkPlugins/group/kindnet/Localhost 0.1
309 TestNetworkPlugins/group/kindnet/HairPin 0.13
310 TestNetworkPlugins/group/calico/ControllerPod 6.01
311 TestNetworkPlugins/group/flannel/Start 51.73
312 TestNetworkPlugins/group/calico/KubeletFlags 0.34
313 TestNetworkPlugins/group/calico/NetCatPod 9.24
314 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
315 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.25
316 TestNetworkPlugins/group/calico/DNS 0.12
317 TestNetworkPlugins/group/calico/Localhost 0.1
318 TestNetworkPlugins/group/calico/HairPin 0.1
319 TestNetworkPlugins/group/custom-flannel/DNS 0.14
320 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
321 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
322 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.34
323 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.27
324 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
325 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
326 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
327 TestNetworkPlugins/group/bridge/Start 32.68
329 TestStartStop/group/old-k8s-version/serial/FirstStart 54.61
330 TestNetworkPlugins/group/flannel/ControllerPod 6.01
332 TestStartStop/group/no-preload/serial/FirstStart 54.1
333 TestNetworkPlugins/group/flannel/KubeletFlags 0.33
334 TestNetworkPlugins/group/flannel/NetCatPod 10.22
335 TestNetworkPlugins/group/bridge/KubeletFlags 0.33
336 TestNetworkPlugins/group/bridge/NetCatPod 9.21
337 TestNetworkPlugins/group/flannel/DNS 0.16
338 TestNetworkPlugins/group/flannel/Localhost 0.12
339 TestNetworkPlugins/group/flannel/HairPin 0.1
340 TestNetworkPlugins/group/bridge/DNS 0.12
341 TestNetworkPlugins/group/bridge/Localhost 0.1
342 TestNetworkPlugins/group/bridge/HairPin 0.11
344 TestStartStop/group/embed-certs/serial/FirstStart 44.29
345 TestStartStop/group/old-k8s-version/serial/DeployApp 8.36
347 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 37.86
349 TestStartStop/group/old-k8s-version/serial/Stop 16.08
350 TestStartStop/group/no-preload/serial/DeployApp 8.28
352 TestStartStop/group/no-preload/serial/Stop 16.25
353 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
354 TestStartStop/group/old-k8s-version/serial/SecondStart 51.85
355 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
356 TestStartStop/group/embed-certs/serial/DeployApp 7.26
357 TestStartStop/group/no-preload/serial/SecondStart 47.53
358 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.27
361 TestStartStop/group/embed-certs/serial/Stop 18.89
362 TestStartStop/group/default-k8s-diff-port/serial/Stop 17.01
363 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
364 TestStartStop/group/embed-certs/serial/SecondStart 49.63
365 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
366 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 43.81
367 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
368 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
369 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
370 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
372 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
373 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
376 TestStartStop/group/newest-cni/serial/FirstStart 29.69
377 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
378 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
379 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
380 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
382 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
383 TestStartStop/group/newest-cni/serial/DeployApp 0
385 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
387 TestStartStop/group/newest-cni/serial/Stop 17.97
388 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
389 TestStartStop/group/newest-cni/serial/SecondStart 10.53
390 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
391 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
392 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
TestDownloadOnly/v1.28.0/json-events (4.93s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-557052 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-557052 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.924719171s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.93s)
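The -o=json flag above switches minikube to machine-readable output, one JSON event object per line, instead of plain text. A hedged sketch of consuming that stream, assuming jq is available; the CloudEvents-style "type" field follows minikube's JSON event format, but field names should be verified against a real run:

	# Print the type of each JSON event emitted during a download-only start.
	out/minikube-linux-amd64 start -o=json --download-only -p download-only-557052 \
		--force --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker \
		| jq -r '.type'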

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1129 08:28:14.801722    9216 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1129 08:28:14.801818    9216 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
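The check above only asserts that the preload tarball is already cached on disk. A minimal sketch of the same verification by hand; the path is copied verbatim from the log line above and is specific to this CI host's MINIKUBE_HOME:

	# Confirm the v1.28.0 CRI-O preload tarball is present in the minikube cache.
	ls -lh /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4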

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-557052
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-557052: exit status 85 (72.969251ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-557052 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-557052 │ jenkins │ v1.37.0 │ 29 Nov 25 08:28 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 08:28:09
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 08:28:09.927185    9228 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:28:09.927494    9228 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:28:09.927504    9228 out.go:374] Setting ErrFile to fd 2...
	I1129 08:28:09.927508    9228 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:28:09.927703    9228 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	W1129 08:28:09.927819    9228 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22000-5652/.minikube/config/config.json: open /home/jenkins/minikube-integration/22000-5652/.minikube/config/config.json: no such file or directory
	I1129 08:28:09.928306    9228 out.go:368] Setting JSON to true
	I1129 08:28:09.929166    9228 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":642,"bootTime":1764404248,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 08:28:09.929226    9228 start.go:143] virtualization: kvm guest
	I1129 08:28:09.932806    9228 out.go:99] [download-only-557052] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 08:28:09.932954    9228 notify.go:221] Checking for updates...
	W1129 08:28:09.932970    9228 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball: no such file or directory
	I1129 08:28:09.934305    9228 out.go:171] MINIKUBE_LOCATION=22000
	I1129 08:28:09.935609    9228 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 08:28:09.936999    9228 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 08:28:09.938446    9228 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5652/.minikube
	I1129 08:28:09.939767    9228 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1129 08:28:09.942111    9228 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1129 08:28:09.942383    9228 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 08:28:09.969177    9228 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1129 08:28:09.969243    9228 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 08:28:10.355605    9228 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-11-29 08:28:10.34640179 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 08:28:10.355700    9228 docker.go:319] overlay module found
	I1129 08:28:10.357161    9228 out.go:99] Using the docker driver based on user configuration
	I1129 08:28:10.357190    9228 start.go:309] selected driver: docker
	I1129 08:28:10.357200    9228 start.go:927] validating driver "docker" against <nil>
	I1129 08:28:10.357284    9228 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 08:28:10.420270    9228 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-11-29 08:28:10.410807334 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 08:28:10.420448    9228 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1129 08:28:10.420991    9228 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1129 08:28:10.421179    9228 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1129 08:28:10.422917    9228 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-557052 host does not exist
	  To start a cluster, run: "minikube start -p download-only-557052"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-557052
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (3.53s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-557986 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-557986 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.533544457s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.53s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1129 08:28:18.770500    9216 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1129 08:28:18.770540    9216 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-5652/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-557986
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-557986: exit status 85 (72.797478ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-557052 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-557052 │ jenkins │ v1.37.0 │ 29 Nov 25 08:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 29 Nov 25 08:28 UTC │ 29 Nov 25 08:28 UTC │
	│ delete  │ -p download-only-557052                                                                                                                                                   │ download-only-557052 │ jenkins │ v1.37.0 │ 29 Nov 25 08:28 UTC │ 29 Nov 25 08:28 UTC │
	│ start   │ -o=json --download-only -p download-only-557986 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-557986 │ jenkins │ v1.37.0 │ 29 Nov 25 08:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 08:28:15
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 08:28:15.288880    9588 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:28:15.288970    9588 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:28:15.288974    9588 out.go:374] Setting ErrFile to fd 2...
	I1129 08:28:15.288979    9588 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:28:15.289610    9588 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 08:28:15.290061    9588 out.go:368] Setting JSON to true
	I1129 08:28:15.290873    9588 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":647,"bootTime":1764404248,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 08:28:15.290926    9588 start.go:143] virtualization: kvm guest
	I1129 08:28:15.292597    9588 out.go:99] [download-only-557986] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 08:28:15.292727    9588 notify.go:221] Checking for updates...
	I1129 08:28:15.293810    9588 out.go:171] MINIKUBE_LOCATION=22000
	I1129 08:28:15.295080    9588 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 08:28:15.296433    9588 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 08:28:15.298589    9588 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5652/.minikube
	I1129 08:28:15.300094    9588 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1129 08:28:15.302104    9588 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1129 08:28:15.302279    9588 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 08:28:15.324854    9588 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1129 08:28:15.324963    9588 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 08:28:15.379587    9588 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-11-29 08:28:15.370088604 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 08:28:15.379692    9588 docker.go:319] overlay module found
	I1129 08:28:15.381307    9588 out.go:99] Using the docker driver based on user configuration
	I1129 08:28:15.381366    9588 start.go:309] selected driver: docker
	I1129 08:28:15.381378    9588 start.go:927] validating driver "docker" against <nil>
	I1129 08:28:15.381473    9588 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 08:28:15.436533    9588 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-11-29 08:28:15.428089592 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 08:28:15.436719    9588 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1129 08:28:15.437215    9588 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1129 08:28:15.437348    9588 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1129 08:28:15.438970    9588 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-557986 host does not exist
	  To start a cluster, run: "minikube start -p download-only-557986"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-557986
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnlyKic (0.41s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-659543 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-659543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-659543
--- PASS: TestDownloadOnlyKic (0.41s)

                                                
                                    
TestBinaryMirror (0.81s)

                                                
                                                
=== RUN   TestBinaryMirror
I1129 08:28:19.896419    9216 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-932462 --alsologtostderr --binary-mirror http://127.0.0.1:42911 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-932462" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-932462
--- PASS: TestBinaryMirror (0.81s)
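The mirror test declines to cache kubectl and instead points at the upstream release URL plus its checksum (see the binary.go line above). A hedged sketch of the equivalent manual verification, assuming only the standard dl.k8s.io layout:

$ curl -LO "https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl"
$ curl -LO "https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256"
# Prints "kubectl: OK" when the binary matches the published digest.
$ echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check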

                                                
                                    
TestOffline (87.8s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-121786 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-121786 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m24.459088432s)
helpers_test.go:175: Cleaning up "offline-crio-121786" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-121786
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-121786: (3.342499449s)
--- PASS: TestOffline (87.80s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-053273
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-053273: exit status 85 (64.694906ms)

                                                
                                                
-- stdout --
	* Profile "addons-053273" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-053273"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-053273
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-053273: exit status 85 (64.491127ms)

                                                
                                                
-- stdout --
	* Profile "addons-053273" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-053273"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (124.86s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-053273 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-053273 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m4.857370202s)
--- PASS: TestAddons/Setup (124.86s)
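Setup turns on every addon in a single start invocation via repeated --addons flags. Individual addons can also be toggled on a running profile; a minimal sketch against the same profile (illustrative only, not part of the test):

$ out/minikube-linux-amd64 -p addons-053273 addons enable metrics-server
$ out/minikube-linux-amd64 -p addons-053273 addons list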

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-053273 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-053273 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.45s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-053273 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-053273 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [985cf3af-eaa2-4b5b-a465-7777bcef18d9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [985cf3af-eaa2-4b5b-a465-7777bcef18d9] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003182578s
addons_test.go:694: (dbg) Run:  kubectl --context addons-053273 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-053273 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-053273 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.45s)

                                                
                                    
TestAddons/StoppedEnableDisable (16.72s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-053273
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-053273: (16.432872351s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-053273
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-053273
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-053273
--- PASS: TestAddons/StoppedEnableDisable (16.72s)

                                                
                                    
TestCertOptions (30.43s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-207443 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-207443 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (26.923988452s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-207443 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-207443 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-207443 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-207443" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-207443
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-207443: (2.709006512s)
--- PASS: TestCertOptions (30.43s)
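The SAN and port assertions ride on the openssl call above. A hedged one-liner to inspect the same certificate fields while the profile still exists (the test deletes it at the end):

$ out/minikube-linux-amd64 ssh -p cert-options-207443 -- "sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"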

                                                
                                    
TestCertExpiration (207.97s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-836438 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-836438 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (20.138383505s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-836438 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
E1129 09:10:50.591737    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/functional-137675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-836438 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (5.326574464s)
helpers_test.go:175: Cleaning up "cert-expiration-836438" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-836438
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-836438: (2.501763247s)
--- PASS: TestCertExpiration (207.97s)
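The second start rotates the cluster certificates to an 8760h (one-year) expiry. A sketch for reading the resulting notAfter date directly, assuming the profile is still running:

$ out/minikube-linux-amd64 ssh -p cert-expiration-836438 -- "sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"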

                                                
                                    
TestForceSystemdFlag (43.33s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-182459 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-182459 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (40.275253474s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-182459 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-182459" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-182459
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-182459: (2.647824251s)
--- PASS: TestForceSystemdFlag (43.33s)
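The flag test asserts on the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf. A hedged way to pull just the relevant key (the key name is the standard CRI-O one; the file's exact contents are not shown in this log):

$ out/minikube-linux-amd64 ssh -p force-systemd-flag-182459 -- "sudo cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
# With --force-systemd this is expected to print: cgroup_manager = "systemd"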

                                                
                                    
TestForceSystemdEnv (30.1s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-076374 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-076374 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.383451167s)
helpers_test.go:175: Cleaning up "force-systemd-env-076374" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-076374
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-076374: (4.713516302s)
--- PASS: TestForceSystemdEnv (30.10s)

                                                
                                    
TestErrorSpam/setup (19.71s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-876529 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-876529 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-876529 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-876529 --driver=docker  --container-runtime=crio: (19.714215668s)
--- PASS: TestErrorSpam/setup (19.71s)

                                                
                                    
TestErrorSpam/start (0.68s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-876529 --log_dir /tmp/nospam-876529 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-876529 --log_dir /tmp/nospam-876529 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-876529 --log_dir /tmp/nospam-876529 start --dry-run
--- PASS: TestErrorSpam/start (0.68s)

                                                
                                    
TestErrorSpam/status (0.95s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-876529 --log_dir /tmp/nospam-876529 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-876529 --log_dir /tmp/nospam-876529 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-876529 --log_dir /tmp/nospam-876529 status
--- PASS: TestErrorSpam/status (0.95s)

                                                
                                    
TestErrorSpam/pause (6.5s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-876529 --log_dir /tmp/nospam-876529 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-876529 --log_dir /tmp/nospam-876529 pause: exit status 80 (2.139647866s)

                                                
                                                
-- stdout --
	* Pausing node nospam-876529 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T08:33:49Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-876529 --log_dir /tmp/nospam-876529 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-876529 --log_dir /tmp/nospam-876529 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-876529 --log_dir /tmp/nospam-876529 pause: exit status 80 (1.979566541s)

                                                
                                                
-- stdout --
	* Pausing node nospam-876529 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T08:33:51Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-876529 --log_dir /tmp/nospam-876529 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-876529 --log_dir /tmp/nospam-876529 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-876529 --log_dir /tmp/nospam-876529 pause: exit status 80 (2.378757692s)

                                                
                                                
-- stdout --
	* Pausing node nospam-876529 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T08:33:53Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-876529 --log_dir /tmp/nospam-876529 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.50s)
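All three pause attempts fail identically: minikube's pre-pause probe, sudo runc list -f json, exits 1 because /run/runc is missing inside the node (the subtest still records PASS, since it is checking minikube's error output rather than a successful pause). A sketch for reproducing the underlying probe on the same profile:

$ out/minikube-linux-amd64 ssh -p nospam-876529 -- "sudo runc list -f json"
$ out/minikube-linux-amd64 ssh -p nospam-876529 -- "ls -ld /run/runc"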

                                                
                                    
TestErrorSpam/unpause (6.46s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-876529 --log_dir /tmp/nospam-876529 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-876529 --log_dir /tmp/nospam-876529 unpause: exit status 80 (2.255489147s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-876529 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T08:33:55Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-876529 --log_dir /tmp/nospam-876529 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-876529 --log_dir /tmp/nospam-876529 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-876529 --log_dir /tmp/nospam-876529 unpause: exit status 80 (1.90934635s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-876529 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T08:33:57Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-876529 --log_dir /tmp/nospam-876529 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-876529 --log_dir /tmp/nospam-876529 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-876529 --log_dir /tmp/nospam-876529 unpause: exit status 80 (2.289631586s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-876529 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-29T08:34:00Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-876529 --log_dir /tmp/nospam-876529 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (6.46s)

                                                
                                    
TestErrorSpam/stop (8.12s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-876529 --log_dir /tmp/nospam-876529 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-876529 --log_dir /tmp/nospam-876529 stop: (7.913288266s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-876529 --log_dir /tmp/nospam-876529 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-876529 --log_dir /tmp/nospam-876529 stop
--- PASS: TestErrorSpam/stop (8.12s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22000-5652/.minikube/files/etc/test/nested/copy/9216/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (36.4s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-137675 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-137675 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (36.394678567s)
--- PASS: TestFunctional/serial/StartWithProxy (36.40s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.16s)
=== RUN   TestFunctional/serial/SoftStart
I1129 08:34:49.224839    9216 config.go:182] Loaded profile config "functional-137675": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-137675 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-137675 --alsologtostderr -v=8: (6.15776555s)
functional_test.go:678: soft start took 6.158718968s for "functional-137675" cluster.
I1129 08:34:55.383191    9216 config.go:182] Loaded profile config "functional-137675": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.16s)

TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.12s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-137675 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.47s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.47s)

TestFunctional/serial/CacheCmd/cache/add_local (0.79s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-137675 /tmp/TestFunctionalserialCacheCmdcacheadd_local2471732081/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 cache add minikube-local-cache-test:functional-137675
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 cache delete minikube-local-cache-test:functional-137675
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-137675
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.79s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.56s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-137675 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (293.400931ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.56s)
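
The cache_reload sequence above is a remove / verify-missing / reload / verify-present cycle: `crictl rmi` deletes the image inside the node, the first `crictl inspecti` is expected to fail (exit 1), `cache reload` pushes the host-side cache back into the node, and the second `inspecti` must succeed. A minimal sketch driving the same cycle, assuming the binary path and profile name shown in this log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// mk runs the minikube binary from this report against the functional profile.
	func mk(args ...string) error {
		cmd := exec.Command("out/minikube-linux-amd64",
			append([]string{"-p", "functional-137675"}, args...)...)
		out, err := cmd.CombinedOutput()
		fmt.Printf("$ %v\n%s", cmd.Args, out)
		return err
	}

	func main() {
		// Remove the image inside the node; inspecti must now fail...
		_ = mk("ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
		if mk("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") == nil {
			fmt.Println("unexpected: image still present after rmi")
		}
		// ...then restore it from the host-side cache and verify it is back.
		_ = mk("cache", "reload")
		if err := mk("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
			fmt.Println("image still missing after reload:", err)
		}
	}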

TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 kubectl -- --context functional-137675 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-137675 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (42.03s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-137675 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1129 08:35:26.237998    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:35:26.244455    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:35:26.255946    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:35:26.277455    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:35:26.318935    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:35:26.400393    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:35:26.562215    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:35:26.883955    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:35:27.525803    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:35:28.807563    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:35:31.370530    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:35:36.492047    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-137675 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.025716026s)
functional_test.go:776: restart took 42.025838471s for "functional-137675" cluster.
I1129 08:35:43.175387    9216 config.go:182] Loaded profile config "functional-137675": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (42.03s)
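
The --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision flag above follows the component.key=value form: on restart, minikube forwards the setting onto the named component's command line. One way to confirm the flag actually reached the running kube-apiserver is to inspect the static pod's command; a sketch, assuming the standard component=kube-apiserver label on the control-plane pod:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Print whether the admission plugin from --extra-config shows up in
		// the kube-apiserver container's command line.
		out, err := exec.Command("kubectl", "--context", "functional-137675",
			"-n", "kube-system", "get", "po", "-l", "component=kube-apiserver",
			"-o", "jsonpath={.items[0].spec.containers[0].command}").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println(strings.Contains(string(out),
			"enable-admission-plugins=NamespaceAutoProvision"))
	}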

TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-137675 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
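
ComponentHealth reads the control-plane pods as JSON and checks the pod phase and the Ready condition separately, since a pod can report phase Running while its Ready condition is still False. A trimmed-down sketch of the same check (the struct keeps only the fields used and is not the test's own type):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// podList models just enough of `kubectl get po -o json` to judge health.
	type podList struct {
		Items []struct {
			Metadata struct {
				Labels map[string]string `json:"labels"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-137675",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
		if err != nil {
			panic(err)
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			ready := "False"
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					ready = c.Status
				}
			}
			// Healthy means phase Running and condition Ready=True, as in the log.
			fmt.Printf("%s phase=%s ready=%s\n",
				p.Metadata.Labels["component"], p.Status.Phase, ready)
		}
	}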

TestFunctional/serial/LogsCmd (1.23s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-137675 logs: (1.230605083s)
--- PASS: TestFunctional/serial/LogsCmd (1.23s)

TestFunctional/serial/LogsFileCmd (1.24s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 logs --file /tmp/TestFunctionalserialLogsFileCmd2191100452/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-137675 logs --file /tmp/TestFunctionalserialLogsFileCmd2191100452/001/logs.txt: (1.234433634s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.24s)

TestFunctional/serial/InvalidService (4.39s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-137675 apply -f testdata/invalidsvc.yaml
E1129 08:35:46.733394    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-137675
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-137675: exit status 115 (347.850435ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30858 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-137675 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.39s)
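
The SVC_UNREACHABLE exit (status 115) above makes the ordering visible: the NodePort URL exists as soon as the Service object does, but minikube refuses to print it because no running pod backs the selector. Checking the Service's Endpoints is one way to draw that distinction explicitly; a minimal sketch under that assumption:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// endpoints models just the ready-address list of a Service's Endpoints.
	type endpoints struct {
		Subsets []struct {
			Addresses []struct {
				IP string `json:"ip"`
			} `json:"addresses"`
		} `json:"subsets"`
	}

	// hasBackend reports whether the Service has at least one ready endpoint,
	// which is what separates "a URL exists" from "the URL is reachable".
	func hasBackend(ctx, svc string) (bool, error) {
		out, err := exec.Command("kubectl", "--context", ctx,
			"get", "endpoints", svc, "-o=json").Output()
		if err != nil {
			return false, err
		}
		var ep endpoints
		if err := json.Unmarshal(out, &ep); err != nil {
			return false, err
		}
		for _, s := range ep.Subsets {
			if len(s.Addresses) > 0 {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		ok, err := hasBackend("functional-137675", "invalid-svc")
		fmt.Println(ok, err) // expect false for the invalid service above
	}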

TestFunctional/parallel/ConfigCmd (0.49s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-137675 config get cpus: exit status 14 (102.566529ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-137675 config get cpus: exit status 14 (75.217291ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)
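
The contract being exercised above: `config get` on an unset key exits with status 14 rather than printing an empty value, so callers can tell "unset" apart from "set to an empty string". A small sketch of branching on that exit code (the meaning of status 14 is taken from this log):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-137675",
			"config", "get", "cpus")
		out, err := cmd.Output()
		var ee *exec.ExitError
		switch {
		case err == nil:
			fmt.Printf("cpus is set to %s\n", out)
		case errors.As(err, &ee) && ee.ExitCode() == 14:
			fmt.Println("cpus is not set") // the exit status seen in the log
		default:
			fmt.Println("unexpected failure:", err)
		}
	}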

TestFunctional/parallel/DashboardCmd (6.97s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-137675 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-137675 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 46760: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.97s)

TestFunctional/parallel/DryRun (0.47s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-137675 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-137675 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (195.598662ms)

-- stdout --
	* [functional-137675] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22000
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22000-5652/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5652/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1129 08:36:00.240881   43852 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:36:00.241034   43852 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:36:00.241048   43852 out.go:374] Setting ErrFile to fd 2...
	I1129 08:36:00.241055   43852 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:36:00.241411   43852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 08:36:00.242007   43852 out.go:368] Setting JSON to false
	I1129 08:36:00.243051   43852 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1112,"bootTime":1764404248,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 08:36:00.243110   43852 start.go:143] virtualization: kvm guest
	I1129 08:36:00.245445   43852 out.go:179] * [functional-137675] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 08:36:00.246860   43852 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 08:36:00.246874   43852 notify.go:221] Checking for updates...
	I1129 08:36:00.249350   43852 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 08:36:00.250631   43852 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 08:36:00.251983   43852 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5652/.minikube
	I1129 08:36:00.253777   43852 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 08:36:00.254981   43852 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 08:36:00.257216   43852 config.go:182] Loaded profile config "functional-137675": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:36:00.258013   43852 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 08:36:00.289667   43852 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1129 08:36:00.289782   43852 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 08:36:00.359674   43852 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-11-29 08:36:00.348235035 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 08:36:00.359817   43852 docker.go:319] overlay module found
	I1129 08:36:00.362737   43852 out.go:179] * Using the docker driver based on existing profile
	I1129 08:36:00.363968   43852 start.go:309] selected driver: docker
	I1129 08:36:00.363996   43852 start.go:927] validating driver "docker" against &{Name:functional-137675 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-137675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 08:36:00.364098   43852 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 08:36:00.366004   43852 out.go:203] 
	W1129 08:36:00.367080   43852 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1129 08:36:00.368201   43852 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-137675 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.47s)
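
The dry run exits 23 (RSRC_INSUFFICIENT_REQ_MEMORY) purely from preflight validation: the requested 250MB is rejected against the 1800MB usable minimum before any container work starts, which is what keeps --dry-run cheap. A minimal sketch of that style of check, with the threshold taken from the error message above (the function is illustrative, not minikube's):

	package main

	import "fmt"

	const minUsableMemoryMB = 1800 // minimum cited in the error message

	// validateMemory mirrors the dry-run preflight: reject undersized requests
	// before doing any real work.
	func validateMemory(requestedMB int) error {
		if requestedMB < minUsableMemoryMB {
			return fmt.Errorf(
				"RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMiB is less than the usable minimum of %dMB",
				requestedMB, minUsableMemoryMB)
		}
		return nil
	}

	func main() {
		fmt.Println(validateMemory(250))  // fails, as in the log
		fmt.Println(validateMemory(4096)) // passes
	}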

TestFunctional/parallel/InternationalLanguage (0.19s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-137675 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-137675 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (194.332129ms)

-- stdout --
	* [functional-137675] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22000
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22000-5652/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5652/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1129 08:36:00.049604   43767 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:36:00.049880   43767 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:36:00.049891   43767 out.go:374] Setting ErrFile to fd 2...
	I1129 08:36:00.049896   43767 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:36:00.050227   43767 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 08:36:00.050665   43767 out.go:368] Setting JSON to false
	I1129 08:36:00.051650   43767 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1112,"bootTime":1764404248,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 08:36:00.051707   43767 start.go:143] virtualization: kvm guest
	I1129 08:36:00.053644   43767 out.go:179] * [functional-137675] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1129 08:36:00.054888   43767 notify.go:221] Checking for updates...
	I1129 08:36:00.054908   43767 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 08:36:00.056135   43767 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 08:36:00.057484   43767 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 08:36:00.058615   43767 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5652/.minikube
	I1129 08:36:00.060058   43767 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 08:36:00.061137   43767 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 08:36:00.062639   43767 config.go:182] Loaded profile config "functional-137675": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:36:00.063655   43767 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 08:36:00.090663   43767 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1129 08:36:00.090818   43767 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 08:36:00.164687   43767 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-11-29 08:36:00.152491044 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 08:36:00.164874   43767 docker.go:319] overlay module found
	I1129 08:36:00.167550   43767 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1129 08:36:00.168838   43767 start.go:309] selected driver: docker
	I1129 08:36:00.168865   43767 start.go:927] validating driver "docker" against &{Name:functional-137675 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-137675 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 08:36:00.168987   43767 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 08:36:00.171328   43767 out.go:203] 
	W1129 08:36:00.172615   43767 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1129 08:36:00.173711   43767 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)

TestFunctional/parallel/StatusCmd (1.17s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.17s)

TestFunctional/parallel/AddonsCmd (0.16s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (22.03s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [1e54e4fe-98e5-4999-971c-021f945aa412] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003530966s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-137675 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-137675 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-137675 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-137675 apply -f testdata/storage-provisioner/pod.yaml
I1129 08:35:58.691698    9216 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [f069cb5b-e44b-4c05-b6bb-5b5da35bdbe6] Pending
helpers_test.go:352: "sp-pod" [f069cb5b-e44b-4c05-b6bb-5b5da35bdbe6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [f069cb5b-e44b-4c05-b6bb-5b5da35bdbe6] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.003759604s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-137675 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-137675 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-137675 apply -f testdata/storage-provisioner/pod.yaml
I1129 08:36:08.265432    9216 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [31ce8601-3541-4598-88ef-48566260db2e] Pending
helpers_test.go:352: "sp-pod" [31ce8601-3541-4598-88ef-48566260db2e] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003972207s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-137675 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (22.03s)
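
The PVC test proves persistence rather than mere provisioning: it writes a file through the first pod, deletes that pod, schedules a replacement bound to the same claim, and lists the file again. The same cycle sketched with the manifests and paths from the log (the wait-for-Running steps between pod operations are elided):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// kc runs kubectl against the profile's context and echoes the output.
	func kc(args ...string) error {
		cmd := exec.Command("kubectl",
			append([]string{"--context", "functional-137675"}, args...)...)
		out, err := cmd.CombinedOutput()
		fmt.Printf("$ %v\n%s", cmd.Args, out)
		return err
	}

	func main() {
		// Write through the first pod, delete it, then read the file back from
		// a fresh pod bound to the same PVC; the file surviving the pod proves
		// the data lives on the claim, not in the container filesystem.
		_ = kc("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
		_ = kc("delete", "-f", "testdata/storage-provisioner/pod.yaml")
		_ = kc("apply", "-f", "testdata/storage-provisioner/pod.yaml")
		// (the real test waits for the new pod to be Running before this step)
		_ = kc("exec", "sp-pod", "--", "ls", "/tmp/mount")
	}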

TestFunctional/parallel/SSHCmd (0.71s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.71s)

TestFunctional/parallel/CpCmd (1.66s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 ssh -n functional-137675 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 cp functional-137675:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd582501262/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 ssh -n functional-137675 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 ssh -n functional-137675 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.66s)

TestFunctional/parallel/MySQL (14.94s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-137675 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
2025/11/29 08:36:17 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:352: "mysql-5bb876957f-ml9z2" [bb988b5c-0473-44bb-b36d-566c27a69a34] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-ml9z2" [bb988b5c-0473-44bb-b36d-566c27a69a34] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 12.003143779s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-137675 exec mysql-5bb876957f-ml9z2 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-137675 exec mysql-5bb876957f-ml9z2 -- mysql -ppassword -e "show databases;": exit status 1 (82.488834ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1129 08:36:29.441749    9216 retry.go:31] will retry after 1.331666171s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-137675 exec mysql-5bb876957f-ml9z2 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-137675 exec mysql-5bb876957f-ml9z2 -- mysql -ppassword -e "show databases;": exit status 1 (85.327742ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1129 08:36:30.859097    9216 retry.go:31] will retry after 1.196236173s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-137675 exec mysql-5bb876957f-ml9z2 -- mysql -ppassword -e "show databases;"
E1129 08:36:48.176231    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:38:10.097703    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:40:26.237937    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:40:53.939139    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:45:26.237375    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/MySQL (14.94s)
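
ERROR 2002 ("can't connect through socket") means mysqld inside the pod has not finished starting even though the pod is already Running, so the harness retries the query with varying delays (1.33s, then 1.20s here). A minimal retry sketch under those assumptions, with a simple growing delay in place of minikube's jittered backoff:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// queryWithRetry re-runs the query until mysqld accepts connections or the
	// attempts run out; ERROR 2002 only says the server socket is not up yet.
	func queryWithRetry(attempts int, delay time.Duration) error {
		var err error
		for i := 0; i < attempts; i++ {
			err = exec.Command("kubectl", "--context", "functional-137675",
				"exec", "mysql-5bb876957f-ml9z2", "--",
				"mysql", "-ppassword", "-e", "show databases;").Run()
			if err == nil {
				return nil
			}
			time.Sleep(delay)
			delay += delay / 2 // simple growth; the real helper jitters this
		}
		return err
	}

	func main() {
		fmt.Println(queryWithRetry(5, time.Second))
	}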

TestFunctional/parallel/FileSync (0.38s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/9216/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 ssh "sudo cat /etc/test/nested/copy/9216/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.38s)

TestFunctional/parallel/CertSync (1.98s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/9216.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 ssh "sudo cat /etc/ssl/certs/9216.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/9216.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 ssh "sudo cat /usr/share/ca-certificates/9216.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/92162.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 ssh "sudo cat /etc/ssl/certs/92162.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/92162.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 ssh "sudo cat /usr/share/ca-certificates/92162.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.98s)
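
CertSync checks each certificate twice: under its plain file name and under its OpenSSL subject-hash name (51391683.0 and 3ec20f2e.0 above), which is how /etc/ssl/certs indexes CA certificates. A small sketch of deriving that hashed name, assuming openssl is on PATH:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hashedName returns the trust-store file name for a CA certificate: the
	// OpenSSL subject hash plus a ".0" suffix (e.g. 51391683.0 in the test).
	func hashedName(certPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-noout",
			"-subject_hash", "-in", certPath).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)) + ".0", nil
	}

	func main() {
		name, err := hashedName("/usr/share/ca-certificates/9216.pem")
		fmt.Println(name, err)
	}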

TestFunctional/parallel/NodeLabels (0.06s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-137675 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-137675 ssh "sudo systemctl is-active docker": exit status 1 (370.473075ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-137675 ssh "sudo systemctl is-active containerd": exit status 1 (348.084257ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)
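
Both non-zero exits above are the passing case: on a crio cluster, docker and containerd must be inactive, and `systemctl is-active` exits with status 3 for an inactive unit, which ssh surfaces as "Process exited with status 3". A minimal sketch of that assertion in the same spirit, with the binary path and profile name taken from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeInactive reports whether a systemd unit inside the node is inactive.
// A non-nil error is expected here: `systemctl is-active` exits non-zero
// (3 = inactive) when the unit is not running.
func runtimeInactive(profile, unit string) bool {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "sudo systemctl is-active "+unit).CombinedOutput()
	return err != nil && strings.Contains(string(out), "inactive")
}

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		fmt.Printf("%s inactive: %v\n", unit, runtimeInactive("functional-137675", unit))
	}
}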

TestFunctional/parallel/License (0.32s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.32s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.49s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-137675 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-137675 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-137675 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-137675 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 41189: os: process already finished
helpers_test.go:519: unable to terminate pid 40830: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.49s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-137675 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/nginx                 │ alpine             │ d4918ca78576a │ 54.3MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/nginx                 │ latest             │ 60adc2e137e75 │ 155MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-137675 image ls --format table --alsologtostderr:
I1129 08:36:23.098647   48796 out.go:360] Setting OutFile to fd 1 ...
I1129 08:36:23.098968   48796 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 08:36:23.098982   48796 out.go:374] Setting ErrFile to fd 2...
I1129 08:36:23.098988   48796 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 08:36:23.099286   48796 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
I1129 08:36:23.100120   48796 config.go:182] Loaded profile config "functional-137675": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1129 08:36:23.100260   48796 config.go:182] Loaded profile config "functional-137675": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1129 08:36:23.100901   48796 cli_runner.go:164] Run: docker container inspect functional-137675 --format={{.State.Status}}
I1129 08:36:23.118559   48796 ssh_runner.go:195] Run: systemctl --version
I1129 08:36:23.118611   48796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-137675
I1129 08:36:23.138418   48796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/functional-137675/id_rsa Username:docker}
I1129 08:36:23.239530   48796 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-137675 image ls --format json --alsologtostderr:
[{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54252718"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541"],"repoTags":["docker.io/library/nginx:latest"],"size":"155491845"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-137675 image ls --format json --alsologtostderr:
I1129 08:36:22.228459   48709 out.go:360] Setting OutFile to fd 1 ...
I1129 08:36:22.228570   48709 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 08:36:22.228576   48709 out.go:374] Setting ErrFile to fd 2...
I1129 08:36:22.228583   48709 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 08:36:22.228900   48709 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
I1129 08:36:22.229646   48709 config.go:182] Loaded profile config "functional-137675": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1129 08:36:22.229783   48709 config.go:182] Loaded profile config "functional-137675": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1129 08:36:22.230502   48709 cli_runner.go:164] Run: docker container inspect functional-137675 --format={{.State.Status}}
I1129 08:36:22.254867   48709 ssh_runner.go:195] Run: systemctl --version
I1129 08:36:22.254925   48709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-137675
I1129 08:36:22.278438   48709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/functional-137675/id_rsa Username:docker}
I1129 08:36:22.388018   48709 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.88s)
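
The stdout above is a flat JSON array of image objects; the fields visible in this run are id, repoDigests, repoTags, and size (bytes, encoded as a decimal string). A small decoding sketch, assuming that shape holds:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// image mirrors the fields visible in the stdout above; any other fields
// are ignored by the decoder.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes as a decimal string
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-137675",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		fmt.Printf("%.13s  %v  %s bytes\n", img.ID, img.RepoTags, img.Size)
	}
}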

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-137675 image ls --format yaml --alsologtostderr:
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54252718"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-137675 image ls --format yaml --alsologtostderr:
I1129 08:36:23.330680   48878 out.go:360] Setting OutFile to fd 1 ...
I1129 08:36:23.330943   48878 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 08:36:23.330952   48878 out.go:374] Setting ErrFile to fd 2...
I1129 08:36:23.330956   48878 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 08:36:23.331176   48878 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
I1129 08:36:23.331733   48878 config.go:182] Loaded profile config "functional-137675": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1129 08:36:23.331823   48878 config.go:182] Loaded profile config "functional-137675": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1129 08:36:23.332226   48878 cli_runner.go:164] Run: docker container inspect functional-137675 --format={{.State.Status}}
I1129 08:36:23.350463   48878 ssh_runner.go:195] Run: systemctl --version
I1129 08:36:23.350529   48878 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-137675
I1129 08:36:23.369754   48878 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/functional-137675/id_rsa Username:docker}
I1129 08:36:23.469351   48878 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-137675 ssh pgrep buildkitd: exit status 1 (277.05681ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 image build -t localhost/my-image:functional-137675 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-137675 image build -t localhost/my-image:functional-137675 testdata/build --alsologtostderr: (2.263336715s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-137675 image build -t localhost/my-image:functional-137675 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 96486cf8b5c
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-137675
--> 2fb39be9cf6
Successfully tagged localhost/my-image:functional-137675
2fb39be9cf69122e7ca7caa2938b3f795e6ea65aee2c96369ca0cf627372046e
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-137675 image build -t localhost/my-image:functional-137675 testdata/build --alsologtostderr:
I1129 08:36:23.840535   49107 out.go:360] Setting OutFile to fd 1 ...
I1129 08:36:23.840816   49107 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 08:36:23.840826   49107 out.go:374] Setting ErrFile to fd 2...
I1129 08:36:23.840830   49107 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 08:36:23.841051   49107 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
I1129 08:36:23.841586   49107 config.go:182] Loaded profile config "functional-137675": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1129 08:36:23.842178   49107 config.go:182] Loaded profile config "functional-137675": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1129 08:36:23.842605   49107 cli_runner.go:164] Run: docker container inspect functional-137675 --format={{.State.Status}}
I1129 08:36:23.861242   49107 ssh_runner.go:195] Run: systemctl --version
I1129 08:36:23.861289   49107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-137675
I1129 08:36:23.879979   49107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/functional-137675/id_rsa Username:docker}
I1129 08:36:23.981613   49107 build_images.go:162] Building image from path: /tmp/build.2222125011.tar
I1129 08:36:23.981684   49107 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1129 08:36:23.989882   49107 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2222125011.tar
I1129 08:36:23.993517   49107 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2222125011.tar: stat -c "%s %y" /var/lib/minikube/build/build.2222125011.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2222125011.tar': No such file or directory
I1129 08:36:23.993556   49107 ssh_runner.go:362] scp /tmp/build.2222125011.tar --> /var/lib/minikube/build/build.2222125011.tar (3072 bytes)
I1129 08:36:24.011037   49107 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2222125011
I1129 08:36:24.018421   49107 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2222125011 -xf /var/lib/minikube/build/build.2222125011.tar
I1129 08:36:24.026434   49107 crio.go:315] Building image: /var/lib/minikube/build/build.2222125011
I1129 08:36:24.026497   49107 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-137675 /var/lib/minikube/build/build.2222125011 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1129 08:36:26.023220   49107 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-137675 /var/lib/minikube/build/build.2222125011 --cgroup-manager=cgroupfs: (1.996681743s)
I1129 08:36:26.023284   49107 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2222125011
I1129 08:36:26.031954   49107 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2222125011.tar
I1129 08:36:26.040048   49107 build_images.go:218] Built localhost/my-image:functional-137675 from /tmp/build.2222125011.tar
I1129 08:36:26.040087   49107 build_images.go:134] succeeded building to: functional-137675
I1129 08:36:26.040093   49107 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.77s)
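
The stderr trace spells out how `image build` works against crio: the build context is tarred on the host, copied into /var/lib/minikube/build on the node, unpacked, built with `sudo podman build ... --cgroup-manager=cgroupfs`, and the staging files are then removed. A sketch of the user-facing equivalent, building and then confirming the tag is visible to the runtime, with names taken from this run:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	const tag = "localhost/my-image:functional-137675"
	// Step 1: the same build command the test runs.
	build := exec.Command("out/minikube-linux-amd64", "-p", "functional-137675",
		"image", "build", "-t", tag, "testdata/build")
	if out, err := build.CombinedOutput(); err != nil {
		log.Fatalf("build failed: %v\n%s", err, out)
	}
	// Step 2: confirm the tag is now visible to crio via `image ls`.
	ls, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-137675",
		"image", "ls").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("image present:", strings.Contains(string(ls), tag))
}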

TestFunctional/parallel/ImageCommands/Setup (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-137675
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.43s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (0.71s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.71s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-137675 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-137675 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [26691976-9976-4982-9d25-e00085936d1d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [26691976-9976-4982-9d25-e00085936d1d] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003618049s
I1129 08:35:59.810747    9216 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.22s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

TestFunctional/parallel/ProfileCmd/profile_list (0.54s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "440.858295ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "94.335405ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.54s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "425.263681ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "82.950412ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)
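
The Took lines record wall-clock timings for each variant; the --light listing returns in roughly a fifth of the time, presumably because it skips probing each profile's live status (an inference from the flag name and the gap here, not from the harness). For consuming the JSON output without committing to its exact schema, generic decoding is enough; the sketch below assumes only that the top level is a JSON object:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-o", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	// Decode generically rather than assuming field names.
	var parsed map[string]json.RawMessage
	if err := json.Unmarshal(out, &parsed); err != nil {
		log.Fatal(err)
	}
	for key, raw := range parsed {
		fmt.Printf("%s: %d bytes of JSON\n", key, len(raw))
	}
}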

TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 image rm kicbase/echo-server:functional-137675 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-137675 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.120.83 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
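
AccessDirect passes because, while `minikube tunnel` is running, the LoadBalancer ingress IP reported above (10.96.120.83, which sits in the service ClusterIP range) is routable from the host. A sketch of the same reachability probe with a short retry budget, using the IP from this run:

package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	url := "http://10.96.120.83" // ingress IP reported for nginx-svc in this run
	for attempt := 0; attempt < 5; attempt++ {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			fmt.Printf("tunnel at %s is working! (HTTP %d)\n", url, resp.StatusCode)
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatalf("no response from %s", url)
}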

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-137675 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

TestFunctional/parallel/MountCmd/any-port (6.97s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-137675 /tmp/TestFunctionalparallelMountCmdany-port490140726/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1764405360651266656" to /tmp/TestFunctionalparallelMountCmdany-port490140726/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1764405360651266656" to /tmp/TestFunctionalparallelMountCmdany-port490140726/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1764405360651266656" to /tmp/TestFunctionalparallelMountCmdany-port490140726/001/test-1764405360651266656
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-137675 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (331.937258ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1129 08:36:00.983645    9216 retry.go:31] will retry after 689.837242ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 29 08:36 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 29 08:36 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 29 08:36 test-1764405360651266656
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 ssh cat /mount-9p/test-1764405360651266656
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-137675 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [25c36641-0380-439b-976e-bba5c1cb0bdc] Pending
helpers_test.go:352: "busybox-mount" [25c36641-0380-439b-976e-bba5c1cb0bdc] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [25c36641-0380-439b-976e-bba5c1cb0bdc] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [25c36641-0380-439b-976e-bba5c1cb0bdc] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003606708s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-137675 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 ssh stat /mount-9p/created-by-pod
E1129 08:36:07.214819    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-137675 /tmp/TestFunctionalparallelMountCmdany-port490140726/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.97s)
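
The mount tests share a shape visible above: start `minikube mount HOSTDIR:/mount-9p` as a daemon, retry `findmnt` until the 9p mount appears (the first probe routinely races the daemon, hence the single retry), then exercise the files from host and pod. A sketch of the start-and-wait portion, assuming the hypothetical source directory /tmp/mount-src already exists:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	// Runs in the background, like the test's daemonized mount process.
	mount := exec.Command("out/minikube-linux-amd64", "mount",
		"-p", "functional-137675", "/tmp/mount-src:/mount-9p")
	if err := mount.Start(); err != nil {
		log.Fatal(err)
	}
	defer mount.Process.Kill()

	// Poll until findmnt sees the 9p mount inside the node.
	for attempt := 0; attempt < 10; attempt++ {
		err := exec.Command("out/minikube-linux-amd64", "-p", "functional-137675",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("9p mount is up")
			return
		}
		time.Sleep(time.Second)
	}
	log.Fatal("mount never appeared")
}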

TestFunctional/parallel/MountCmd/specific-port (1.8s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-137675 /tmp/TestFunctionalparallelMountCmdspecific-port1229108312/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-137675 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (299.204369ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1129 08:36:07.924348    9216 retry.go:31] will retry after 459.300861ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-137675 /tmp/TestFunctionalparallelMountCmdspecific-port1229108312/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-137675 ssh "sudo umount -f /mount-9p": exit status 1 (275.577793ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-137675 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-137675 /tmp/TestFunctionalparallelMountCmdspecific-port1229108312/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.80s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.6s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-137675 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1742623872/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-137675 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1742623872/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-137675 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1742623872/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-137675 ssh "findmnt -T" /mount1: exit status 1 (339.003135ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1129 08:36:09.764743    9216 retry.go:31] will retry after 375.623484ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-137675 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-137675 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1742623872/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-137675 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1742623872/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-137675 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1742623872/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.60s)

TestFunctional/parallel/ServiceCmd/List (1.72s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-137675 service list: (1.722307323s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.72s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.7s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-137675 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-137675 service list -o json: (1.701943771s)
functional_test.go:1504: Took "1.702031989s" to run "out/minikube-linux-amd64 -p functional-137675 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.70s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-137675
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-137675
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-137675
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (140.7s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-559225 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m19.950110144s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (140.70s)

TestMultiControlPlane/serial/DeployApp (5.4s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-559225 kubectl -- rollout status deployment/busybox: (3.467906279s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 kubectl -- exec busybox-7b57f96db7-56qx6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 kubectl -- exec busybox-7b57f96db7-hh9bp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 kubectl -- exec busybox-7b57f96db7-lk8kt -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 kubectl -- exec busybox-7b57f96db7-56qx6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 kubectl -- exec busybox-7b57f96db7-hh9bp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 kubectl -- exec busybox-7b57f96db7-lk8kt -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 kubectl -- exec busybox-7b57f96db7-56qx6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 kubectl -- exec busybox-7b57f96db7-hh9bp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 kubectl -- exec busybox-7b57f96db7-lk8kt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.40s)
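
DeployApp rolls out three busybox replicas and then checks DNS from each pod against kubernetes.io, kubernetes.default, and the full kubernetes.default.svc.cluster.local. A sketch of that per-pod loop, assuming kubectl on PATH, the context from this run, and that only the busybox pods live in the default namespace:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// List pod names the same way the test does, via jsonpath.
	out, err := exec.Command("kubectl", "--context", "ha-559225", "get", "pods",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		log.Fatal(err)
	}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range strings.Fields(string(out)) {
		for _, name := range names {
			// One nslookup per pod and name, mirroring the matrix above.
			err := exec.Command("kubectl", "--context", "ha-559225",
				"exec", pod, "--", "nslookup", name).Run()
			fmt.Printf("%s -> %s: ok=%v\n", pod, name, err == nil)
		}
	}
}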

TestMultiControlPlane/serial/PingHostFromPods (1.03s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 kubectl -- exec busybox-7b57f96db7-56qx6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 kubectl -- exec busybox-7b57f96db7-56qx6 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 kubectl -- exec busybox-7b57f96db7-hh9bp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 kubectl -- exec busybox-7b57f96db7-hh9bp -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 kubectl -- exec busybox-7b57f96db7-lk8kt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 kubectl -- exec busybox-7b57f96db7-lk8kt -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.03s)
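A note on the pipeline above: with the nslookup shipped in the busybox test image, the answer record for the queried name lands on the fifth line of output, which is why the test slices it with awk 'NR==5' and takes the third space-separated field. A rough sketch of what the pipeline sees (the addresses are illustrative, and the layout assumes BusyBox-style nslookup output):

    # nslookup host.minikube.internal        (run inside the busybox pod)
    Server:    10.96.0.10
    Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

    Name:      host.minikube.internal
    Address 1: 192.168.49.1 host.minikube.internal    <- line 5; field 3 is the host IP

    # ... | awk 'NR==5' | cut -d' ' -f3      prints 192.168.49.1, the gateway the pods then ping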

TestMultiControlPlane/serial/AddWorkerNode (22.86s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-559225 node add --alsologtostderr -v 5: (21.965526403s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (22.86s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-559225 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.9s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.90s)

TestMultiControlPlane/serial/CopyFile (17.31s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 cp testdata/cp-test.txt ha-559225:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 ssh -n ha-559225 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 cp ha-559225:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2235594065/001/cp-test_ha-559225.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 ssh -n ha-559225 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 cp ha-559225:/home/docker/cp-test.txt ha-559225-m02:/home/docker/cp-test_ha-559225_ha-559225-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 ssh -n ha-559225 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 ssh -n ha-559225-m02 "sudo cat /home/docker/cp-test_ha-559225_ha-559225-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 cp ha-559225:/home/docker/cp-test.txt ha-559225-m03:/home/docker/cp-test_ha-559225_ha-559225-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 ssh -n ha-559225 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 ssh -n ha-559225-m03 "sudo cat /home/docker/cp-test_ha-559225_ha-559225-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 cp ha-559225:/home/docker/cp-test.txt ha-559225-m04:/home/docker/cp-test_ha-559225_ha-559225-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 ssh -n ha-559225 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 ssh -n ha-559225-m04 "sudo cat /home/docker/cp-test_ha-559225_ha-559225-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 cp testdata/cp-test.txt ha-559225-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 ssh -n ha-559225-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 cp ha-559225-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2235594065/001/cp-test_ha-559225-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 ssh -n ha-559225-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 cp ha-559225-m02:/home/docker/cp-test.txt ha-559225:/home/docker/cp-test_ha-559225-m02_ha-559225.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 ssh -n ha-559225-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 ssh -n ha-559225 "sudo cat /home/docker/cp-test_ha-559225-m02_ha-559225.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 cp ha-559225-m02:/home/docker/cp-test.txt ha-559225-m03:/home/docker/cp-test_ha-559225-m02_ha-559225-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 ssh -n ha-559225-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 ssh -n ha-559225-m03 "sudo cat /home/docker/cp-test_ha-559225-m02_ha-559225-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 cp ha-559225-m02:/home/docker/cp-test.txt ha-559225-m04:/home/docker/cp-test_ha-559225-m02_ha-559225-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 ssh -n ha-559225-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 ssh -n ha-559225-m04 "sudo cat /home/docker/cp-test_ha-559225-m02_ha-559225-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 cp testdata/cp-test.txt ha-559225-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 ssh -n ha-559225-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 cp ha-559225-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2235594065/001/cp-test_ha-559225-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 ssh -n ha-559225-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 cp ha-559225-m03:/home/docker/cp-test.txt ha-559225:/home/docker/cp-test_ha-559225-m03_ha-559225.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 ssh -n ha-559225-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 ssh -n ha-559225 "sudo cat /home/docker/cp-test_ha-559225-m03_ha-559225.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 cp ha-559225-m03:/home/docker/cp-test.txt ha-559225-m02:/home/docker/cp-test_ha-559225-m03_ha-559225-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 ssh -n ha-559225-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 ssh -n ha-559225-m02 "sudo cat /home/docker/cp-test_ha-559225-m03_ha-559225-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 cp ha-559225-m03:/home/docker/cp-test.txt ha-559225-m04:/home/docker/cp-test_ha-559225-m03_ha-559225-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 ssh -n ha-559225-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 ssh -n ha-559225-m04 "sudo cat /home/docker/cp-test_ha-559225-m03_ha-559225-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 cp testdata/cp-test.txt ha-559225-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 ssh -n ha-559225-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 cp ha-559225-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2235594065/001/cp-test_ha-559225-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 ssh -n ha-559225-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 cp ha-559225-m04:/home/docker/cp-test.txt ha-559225:/home/docker/cp-test_ha-559225-m04_ha-559225.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 ssh -n ha-559225-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 ssh -n ha-559225 "sudo cat /home/docker/cp-test_ha-559225-m04_ha-559225.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 cp ha-559225-m04:/home/docker/cp-test.txt ha-559225-m02:/home/docker/cp-test_ha-559225-m04_ha-559225-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 ssh -n ha-559225-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 ssh -n ha-559225-m02 "sudo cat /home/docker/cp-test_ha-559225-m04_ha-559225-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 cp ha-559225-m04:/home/docker/cp-test.txt ha-559225-m03:/home/docker/cp-test_ha-559225-m04_ha-559225-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 ssh -n ha-559225-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 ssh -n ha-559225-m03 "sudo cat /home/docker/cp-test_ha-559225-m04_ha-559225-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.31s)
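The long cp/ssh sequence above is an all-pairs copy check: testdata/cp-test.txt is pushed to each of the four nodes, copied back to a host temp dir, cross-copied to every other node, and each hop is verified with a sudo cat over ssh. Condensed into a hypothetical loop, it is roughly equivalent to this (the real test enumerates every pair explicitly):

    for src in ha-559225 ha-559225-m02 ha-559225-m03 ha-559225-m04; do
      out/minikube-linux-amd64 -p ha-559225 cp testdata/cp-test.txt "$src":/home/docker/cp-test.txt
      out/minikube-linux-amd64 -p ha-559225 ssh -n "$src" "sudo cat /home/docker/cp-test.txt"
      for dst in ha-559225 ha-559225-m02 ha-559225-m03 ha-559225-m04; do
        [ "$src" = "$dst" ] && continue
        out/minikube-linux-amd64 -p ha-559225 cp "$src":/home/docker/cp-test.txt \
          "$dst":/home/docker/cp-test_"$src"_"$dst".txt
        out/minikube-linux-amd64 -p ha-559225 ssh -n "$dst" "sudo cat /home/docker/cp-test_${src}_${dst}.txt"
      done
    done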

TestMultiControlPlane/serial/StopSecondaryNode (19.8s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-559225 node stop m02 --alsologtostderr -v 5: (19.090419784s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-559225 status --alsologtostderr -v 5: exit status 7 (708.738919ms)

-- stdout --
	ha-559225
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-559225-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-559225-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-559225-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1129 08:49:33.745117   73703 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:49:33.745375   73703 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:49:33.745382   73703 out.go:374] Setting ErrFile to fd 2...
	I1129 08:49:33.745387   73703 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:49:33.745640   73703 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 08:49:33.745821   73703 out.go:368] Setting JSON to false
	I1129 08:49:33.745859   73703 mustload.go:66] Loading cluster: ha-559225
	I1129 08:49:33.745959   73703 notify.go:221] Checking for updates...
	I1129 08:49:33.746246   73703 config.go:182] Loaded profile config "ha-559225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:49:33.746261   73703 status.go:174] checking status of ha-559225 ...
	I1129 08:49:33.746674   73703 cli_runner.go:164] Run: docker container inspect ha-559225 --format={{.State.Status}}
	I1129 08:49:33.767298   73703 status.go:371] ha-559225 host status = "Running" (err=<nil>)
	I1129 08:49:33.767325   73703 host.go:66] Checking if "ha-559225" exists ...
	I1129 08:49:33.767596   73703 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-559225
	I1129 08:49:33.785969   73703 host.go:66] Checking if "ha-559225" exists ...
	I1129 08:49:33.786223   73703 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 08:49:33.786270   73703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-559225
	I1129 08:49:33.804171   73703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/ha-559225/id_rsa Username:docker}
	I1129 08:49:33.904660   73703 ssh_runner.go:195] Run: systemctl --version
	I1129 08:49:33.911423   73703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 08:49:33.923446   73703 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 08:49:33.979556   73703 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:74 SystemTime:2025-11-29 08:49:33.969966019 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 08:49:33.980313   73703 kubeconfig.go:125] found "ha-559225" server: "https://192.168.49.254:8443"
	I1129 08:49:33.980350   73703 api_server.go:166] Checking apiserver status ...
	I1129 08:49:33.980401   73703 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 08:49:33.992097   73703 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1233/cgroup
	W1129 08:49:34.000283   73703 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1233/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1129 08:49:34.000332   73703 ssh_runner.go:195] Run: ls
	I1129 08:49:34.004583   73703 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1129 08:49:34.010182   73703 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1129 08:49:34.010211   73703 status.go:463] ha-559225 apiserver status = Running (err=<nil>)
	I1129 08:49:34.010222   73703 status.go:176] ha-559225 status: &{Name:ha-559225 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1129 08:49:34.010243   73703 status.go:174] checking status of ha-559225-m02 ...
	I1129 08:49:34.010489   73703 cli_runner.go:164] Run: docker container inspect ha-559225-m02 --format={{.State.Status}}
	I1129 08:49:34.030734   73703 status.go:371] ha-559225-m02 host status = "Stopped" (err=<nil>)
	I1129 08:49:34.030758   73703 status.go:384] host is not running, skipping remaining checks
	I1129 08:49:34.030767   73703 status.go:176] ha-559225-m02 status: &{Name:ha-559225-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1129 08:49:34.030793   73703 status.go:174] checking status of ha-559225-m03 ...
	I1129 08:49:34.031076   73703 cli_runner.go:164] Run: docker container inspect ha-559225-m03 --format={{.State.Status}}
	I1129 08:49:34.049369   73703 status.go:371] ha-559225-m03 host status = "Running" (err=<nil>)
	I1129 08:49:34.049395   73703 host.go:66] Checking if "ha-559225-m03" exists ...
	I1129 08:49:34.049667   73703 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-559225-m03
	I1129 08:49:34.067704   73703 host.go:66] Checking if "ha-559225-m03" exists ...
	I1129 08:49:34.068006   73703 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 08:49:34.068073   73703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-559225-m03
	I1129 08:49:34.087259   73703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/ha-559225-m03/id_rsa Username:docker}
	I1129 08:49:34.187256   73703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 08:49:34.200054   73703 kubeconfig.go:125] found "ha-559225" server: "https://192.168.49.254:8443"
	I1129 08:49:34.200080   73703 api_server.go:166] Checking apiserver status ...
	I1129 08:49:34.200120   73703 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 08:49:34.211009   73703 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1166/cgroup
	W1129 08:49:34.219325   73703 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1166/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1129 08:49:34.219382   73703 ssh_runner.go:195] Run: ls
	I1129 08:49:34.223269   73703 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1129 08:49:34.227542   73703 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1129 08:49:34.227565   73703 status.go:463] ha-559225-m03 apiserver status = Running (err=<nil>)
	I1129 08:49:34.227574   73703 status.go:176] ha-559225-m03 status: &{Name:ha-559225-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1129 08:49:34.227589   73703 status.go:174] checking status of ha-559225-m04 ...
	I1129 08:49:34.227864   73703 cli_runner.go:164] Run: docker container inspect ha-559225-m04 --format={{.State.Status}}
	I1129 08:49:34.246299   73703 status.go:371] ha-559225-m04 host status = "Running" (err=<nil>)
	I1129 08:49:34.246319   73703 host.go:66] Checking if "ha-559225-m04" exists ...
	I1129 08:49:34.246550   73703 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-559225-m04
	I1129 08:49:34.264659   73703 host.go:66] Checking if "ha-559225-m04" exists ...
	I1129 08:49:34.265089   73703 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 08:49:34.265161   73703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-559225-m04
	I1129 08:49:34.284670   73703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/ha-559225-m04/id_rsa Username:docker}
	I1129 08:49:34.382930   73703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 08:49:34.395285   73703 status.go:176] ha-559225-m04 status: &{Name:ha-559225-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (19.80s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

TestMultiControlPlane/serial/RestartSecondaryNode (14.72s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-559225 node start m02 --alsologtostderr -v 5: (13.773266955s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (14.72s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.92s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.92s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (105.02s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 stop --alsologtostderr -v 5
E1129 08:50:26.237722    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-559225 stop --alsologtostderr -v 5: (49.18400735s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 start --wait true --alsologtostderr -v 5
E1129 08:50:50.591212    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/functional-137675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:50:50.597724    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/functional-137675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:50:50.609139    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/functional-137675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:50:50.630626    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/functional-137675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:50:50.672082    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/functional-137675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:50:50.753802    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/functional-137675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:50:50.915205    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/functional-137675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:50:51.236700    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/functional-137675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:50:51.878345    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/functional-137675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:50:53.160307    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/functional-137675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:50:55.722227    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/functional-137675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:51:00.844208    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/functional-137675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:51:11.085686    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/functional-137675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:51:31.567043    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/functional-137675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-559225 start --wait true --alsologtostderr -v 5: (55.70272473s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (105.02s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.57s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-559225 node delete m03 --alsologtostderr -v 5: (9.757164848s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.57s)
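The go-template in the last check walks every node object and prints the status of its Ready condition, one per line, so the three remaining nodes should each print True. Indented here for readability only (whitespace inside a go-template changes its output, which is why the test keeps it on one line):

    kubectl get nodes -o go-template='
      {{range .items}}                    {{/* every node */}}
        {{range .status.conditions}}      {{/* every node condition */}}
          {{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}
        {{end}}
      {{end}}'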

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.71s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.71s)

TestMultiControlPlane/serial/StopCluster (41.6s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 stop --alsologtostderr -v 5
E1129 08:51:49.301265    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:52:12.528478    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/functional-137675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-559225 stop --alsologtostderr -v 5: (41.477304501s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-559225 status --alsologtostderr -v 5: exit status 7 (119.253225ms)

-- stdout --
	ha-559225
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-559225-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-559225-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1129 08:52:28.594721   87862 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:52:28.594873   87862 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:52:28.594883   87862 out.go:374] Setting ErrFile to fd 2...
	I1129 08:52:28.594890   87862 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:52:28.595153   87862 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 08:52:28.595358   87862 out.go:368] Setting JSON to false
	I1129 08:52:28.595391   87862 mustload.go:66] Loading cluster: ha-559225
	I1129 08:52:28.595497   87862 notify.go:221] Checking for updates...
	I1129 08:52:28.595790   87862 config.go:182] Loaded profile config "ha-559225": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:52:28.595806   87862 status.go:174] checking status of ha-559225 ...
	I1129 08:52:28.596312   87862 cli_runner.go:164] Run: docker container inspect ha-559225 --format={{.State.Status}}
	I1129 08:52:28.615659   87862 status.go:371] ha-559225 host status = "Stopped" (err=<nil>)
	I1129 08:52:28.615686   87862 status.go:384] host is not running, skipping remaining checks
	I1129 08:52:28.615695   87862 status.go:176] ha-559225 status: &{Name:ha-559225 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1129 08:52:28.615736   87862 status.go:174] checking status of ha-559225-m02 ...
	I1129 08:52:28.616093   87862 cli_runner.go:164] Run: docker container inspect ha-559225-m02 --format={{.State.Status}}
	I1129 08:52:28.636459   87862 status.go:371] ha-559225-m02 host status = "Stopped" (err=<nil>)
	I1129 08:52:28.636496   87862 status.go:384] host is not running, skipping remaining checks
	I1129 08:52:28.636505   87862 status.go:176] ha-559225-m02 status: &{Name:ha-559225-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1129 08:52:28.636528   87862 status.go:174] checking status of ha-559225-m04 ...
	I1129 08:52:28.636801   87862 cli_runner.go:164] Run: docker container inspect ha-559225-m04 --format={{.State.Status}}
	I1129 08:52:28.655136   87862 status.go:371] ha-559225-m04 host status = "Stopped" (err=<nil>)
	I1129 08:52:28.655187   87862 status.go:384] host is not running, skipping remaining checks
	I1129 08:52:28.655200   87862 status.go:176] ha-559225-m04 status: &{Name:ha-559225-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (41.60s)

TestMultiControlPlane/serial/RestartCluster (56.31s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-559225 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (55.497187048s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (56.31s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.70s)

TestMultiControlPlane/serial/AddSecondaryNode (41.3s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 node add --control-plane --alsologtostderr -v 5
E1129 08:53:34.450082    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/functional-137675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-559225 node add --control-plane --alsologtostderr -v 5: (40.385521662s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-559225 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (41.30s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

TestJSONOutput/start/Command (41.66s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-169580 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-169580 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (41.660990041s)
--- PASS: TestJSONOutput/start/Command (41.66s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.95s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-169580 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-169580 --output=json --user=testUser: (7.947244623s)
--- PASS: TestJSONOutput/stop/Command (7.95s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-232937 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-232937 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (76.028353ms)

-- stdout --
	{"specversion":"1.0","id":"b5efc650-f27c-4df2-8cc0-b264b5a00892","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-232937] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"59f52280-daa4-4136-a0f2-18a60ce6b209","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22000"}}
	{"specversion":"1.0","id":"385b72dc-662b-4916-8407-03f198bb1926","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"07eac309-395c-4fdf-be05-7ff0709f1328","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22000-5652/kubeconfig"}}
	{"specversion":"1.0","id":"79666e92-8ef2-4d07-8a85-4cd8977c9892","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5652/.minikube"}}
	{"specversion":"1.0","id":"a1ea0209-7e6c-40a3-94f7-815c85fdea1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"06843ba1-5477-4b36-bd83-25958c83f140","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a5307413-87d6-4f28-9b2d-2370343cbaca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-232937" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-232937
--- PASS: TestErrorJSONOutput (0.23s)
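Every line of that stdout is a CloudEvents-style JSON object, which is what makes the failure machine-readable. Assuming jq is available, the error event can be pulled out of the stream with something like:

    out/minikube-linux-amd64 start -p json-output-error-232937 --memory=3072 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message)"'
    # DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64

The select filter and the data.name/data.message paths come straight from the error event shown above.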

TestKicCustomNetwork/create_custom_network (27.95s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-502586 --network=
E1129 08:55:26.241000    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-502586 --network=: (25.785269705s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-502586" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-502586
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-502586: (2.142059668s)
--- PASS: TestKicCustomNetwork/create_custom_network (27.95s)

TestKicCustomNetwork/use_default_bridge_network (22.95s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-248083 --network=bridge
E1129 08:55:50.594008    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/functional-137675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-248083 --network=bridge: (20.93241633s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-248083" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-248083
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-248083: (2.002290476s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.95s)

TestKicExistingNetwork (26.64s)

=== RUN   TestKicExistingNetwork
I1129 08:56:05.658286    9216 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1129 08:56:05.675397    9216 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1129 08:56:05.675478    9216 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1129 08:56:05.675506    9216 cli_runner.go:164] Run: docker network inspect existing-network
W1129 08:56:05.692524    9216 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1129 08:56:05.692554    9216 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1129 08:56:05.692575    9216 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1129 08:56:05.692729    9216 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1129 08:56:05.711861    9216 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-94fc752bc7a7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:ed:43:e0:ad:5a} reservation:<nil>}
I1129 08:56:05.712305    9216 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002d5860}
I1129 08:56:05.712336    9216 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1129 08:56:05.712421    9216 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1129 08:56:05.759376    9216 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-401958 --network=existing-network
E1129 08:56:18.291586    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/functional-137675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-401958 --network=existing-network: (24.498805184s)
helpers_test.go:175: Cleaning up "existing-network-401958" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-401958
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-401958: (2.005273572s)
I1129 08:56:32.281391    9216 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (26.64s)
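For reference, the flow this test exercises (pre-create a Docker network, then point minikube at it) can be reproduced by hand. A minimal shell sketch mirroring the commands logged above; the network name comes from the log, the profile name is illustrative:

    # Pre-create a bridge network the way the harness does (subnet chosen to avoid the taken 192.168.49.0/24).
    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network

    # Attach a new cluster to the pre-existing network instead of letting minikube create one.
    minikube start -p existing-network-demo --network=existing-network

    # Cleanup: the profile has to go before the network can be removed.
    minikube delete -p existing-network-demo
    docker network rm existing-network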

TestKicCustomSubnet (22.85s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-300058 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-300058 --subnet=192.168.60.0/24: (20.689550453s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-300058 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-300058" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-300058
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-300058: (2.140031173s)
--- PASS: TestKicCustomSubnet (22.85s)
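The subnet check pairs the two commands logged above; a sketch with an illustrative profile name:

    # Request a specific subnet, then confirm Docker actually allocated it.
    minikube start -p custom-subnet-demo --subnet=192.168.60.0/24
    docker network inspect custom-subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"   # expect 192.168.60.0/24
    minikube delete -p custom-subnet-demo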

TestKicStaticIP (23.63s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-617317 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-617317 --static-ip=192.168.200.200: (21.347413865s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-617317 ip
helpers_test.go:175: Cleaning up "static-ip-617317" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-617317
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-617317: (2.136316644s)
--- PASS: TestKicStaticIP (23.63s)
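Static IPs work the same way; a sketch using the flag and verification step from the log, with an illustrative profile name:

    # Pin the node's address instead of letting the Docker network allocate one.
    minikube start -p static-ip-demo --static-ip=192.168.200.200
    minikube -p static-ip-demo ip    # should print 192.168.200.200
    minikube delete -p static-ip-demo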

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (47.68s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-955678 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-955678 --driver=docker  --container-runtime=crio: (21.805578084s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-957808 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-957808 --driver=docker  --container-runtime=crio: (19.915150998s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-955678
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-957808
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-957808" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-957808
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-957808: (2.340159547s)
helpers_test.go:175: Cleaning up "first-955678" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-955678
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-955678: (2.380958404s)
--- PASS: TestMinikubeProfile (47.68s)
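Profile switching as exercised above is plain CLI state; a sketch with illustrative profile names:

    # Two independent clusters under different profiles.
    minikube start -p first --driver=docker --container-runtime=crio
    minikube start -p second --driver=docker --container-runtime=crio
    # Select the active profile, then dump all profiles as JSON.
    minikube profile first
    minikube profile list -ojson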

TestMountStart/serial/StartWithMountFirst (7.67s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-809584 --memory=3072 --mount-string /tmp/TestMountStartserial2280349623/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-809584 --memory=3072 --mount-string /tmp/TestMountStartserial2280349623/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.667208336s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.67s)
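The 9p host mount used here can be set up by hand; a sketch with illustrative paths, mirroring the logged flags (the share is verified the same way VerifyMountFirst does below):

    # Share a host directory into the node at /minikube-host over 9p, without starting Kubernetes.
    minikube start -p mount-demo --memory=3072 --no-kubernetes \
      --mount-string /tmp/shared:/minikube-host --mount-uid 0 --mount-gid 0 --mount-port 46464
    # The host files should be visible inside the node:
    minikube -p mount-demo ssh -- ls /minikube-host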

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-809584 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (4.99s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-826143 --memory=3072 --mount-string /tmp/TestMountStartserial2280349623/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-826143 --memory=3072 --mount-string /tmp/TestMountStartserial2280349623/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.984471866s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.99s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-826143 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.68s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-809584 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-809584 --alsologtostderr -v=5: (1.684268846s)
--- PASS: TestMountStart/serial/DeleteFirst (1.68s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-826143 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.25s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-826143
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-826143: (1.252199394s)
--- PASS: TestMountStart/serial/Stop (1.25s)

TestMountStart/serial/RestartStopped (7.29s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-826143
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-826143: (6.293874017s)
--- PASS: TestMountStart/serial/RestartStopped (7.29s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-826143 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (68.02s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-027136 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-027136 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m7.521715021s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (68.02s)
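A two-node start like this one is reproducible directly; a sketch with an illustrative profile name, mirroring the logged flags:

    # Bring up a control plane plus one worker in a single start.
    minikube start -p multinode-demo --nodes=2 --memory=3072 --driver=docker --container-runtime=crio
    # status reports host/kubelet/apiserver state per node.
    minikube -p multinode-demo status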

TestMultiNode/serial/DeployApp2Nodes (3.8s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-027136 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-027136 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-027136 -- rollout status deployment/busybox: (2.468642822s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-027136 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-027136 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-027136 -- exec busybox-7b57f96db7-4jdt4 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-027136 -- exec busybox-7b57f96db7-74hx6 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-027136 -- exec busybox-7b57f96db7-4jdt4 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-027136 -- exec busybox-7b57f96db7-74hx6 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-027136 -- exec busybox-7b57f96db7-4jdt4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-027136 -- exec busybox-7b57f96db7-74hx6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.80s)
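The jsonpath probes above are useful on their own; a sketch against an illustrative multinode-demo context, assuming the busybox deployment from the test manifest:

    kubectl --context multinode-demo rollout status deployment/busybox
    # All pod IPs on one line, then grab the first pod's name for later exec calls.
    kubectl --context multinode-demo get pods -o jsonpath='{.items[*].status.podIP}'
    POD=$(kubectl --context multinode-demo get pods -o jsonpath='{.items[0].metadata.name}')
    # Cluster DNS should resolve from any pod, whichever node it landed on.
    kubectl --context multinode-demo exec "$POD" -- nslookup kubernetes.default.svc.cluster.local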

TestMultiNode/serial/PingHostFrom2Pods (0.72s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-027136 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-027136 -- exec busybox-7b57f96db7-4jdt4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-027136 -- exec busybox-7b57f96db7-4jdt4 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-027136 -- exec busybox-7b57f96db7-74hx6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-027136 -- exec busybox-7b57f96db7-74hx6 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.72s)
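The host-reachability check is a small pipeline: resolve host.minikube.internal inside the pod, slice out the address, then ping it. A sketch reusing $POD from the previous snippet; the awk 'NR==5' line number is the test's assumption about busybox nslookup output:

    HOST_IP=$(kubectl --context multinode-demo exec "$POD" -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl --context multinode-demo exec "$POD" -- sh -c "ping -c 1 $HOST_IP"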

TestMultiNode/serial/AddNode (25.52s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-027136 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-027136 -v=5 --alsologtostderr: (24.877686956s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (25.52s)
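Workers can also be appended after the fact; a sketch, assuming the multinode-demo profile from the earlier snippet:

    # minikube numbers added nodes m02, m03, ...
    minikube node add -p multinode-demo
    minikube -p multinode-demo status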

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-027136 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.66s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.66s)

TestMultiNode/serial/CopyFile (9.98s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 cp testdata/cp-test.txt multinode-027136:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 ssh -n multinode-027136 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 cp multinode-027136:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3656238526/001/cp-test_multinode-027136.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 ssh -n multinode-027136 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 cp multinode-027136:/home/docker/cp-test.txt multinode-027136-m02:/home/docker/cp-test_multinode-027136_multinode-027136-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 ssh -n multinode-027136 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 ssh -n multinode-027136-m02 "sudo cat /home/docker/cp-test_multinode-027136_multinode-027136-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 cp multinode-027136:/home/docker/cp-test.txt multinode-027136-m03:/home/docker/cp-test_multinode-027136_multinode-027136-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 ssh -n multinode-027136 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 ssh -n multinode-027136-m03 "sudo cat /home/docker/cp-test_multinode-027136_multinode-027136-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 cp testdata/cp-test.txt multinode-027136-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 ssh -n multinode-027136-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 cp multinode-027136-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3656238526/001/cp-test_multinode-027136-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 ssh -n multinode-027136-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 cp multinode-027136-m02:/home/docker/cp-test.txt multinode-027136:/home/docker/cp-test_multinode-027136-m02_multinode-027136.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 ssh -n multinode-027136-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 ssh -n multinode-027136 "sudo cat /home/docker/cp-test_multinode-027136-m02_multinode-027136.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 cp multinode-027136-m02:/home/docker/cp-test.txt multinode-027136-m03:/home/docker/cp-test_multinode-027136-m02_multinode-027136-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 ssh -n multinode-027136-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 ssh -n multinode-027136-m03 "sudo cat /home/docker/cp-test_multinode-027136-m02_multinode-027136-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 cp testdata/cp-test.txt multinode-027136-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 ssh -n multinode-027136-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 cp multinode-027136-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3656238526/001/cp-test_multinode-027136-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 ssh -n multinode-027136-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 cp multinode-027136-m03:/home/docker/cp-test.txt multinode-027136:/home/docker/cp-test_multinode-027136-m03_multinode-027136.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 ssh -n multinode-027136-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 ssh -n multinode-027136 "sudo cat /home/docker/cp-test_multinode-027136-m03_multinode-027136.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 cp multinode-027136-m03:/home/docker/cp-test.txt multinode-027136-m02:/home/docker/cp-test_multinode-027136-m03_multinode-027136-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 ssh -n multinode-027136-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 ssh -n multinode-027136-m02 "sudo cat /home/docker/cp-test_multinode-027136-m03_multinode-027136-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.98s)
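The copy matrix above boils down to three minikube cp shapes; a sketch with illustrative names:

    # Host file into a node:
    minikube -p multinode-demo cp ./cp-test.txt multinode-demo:/home/docker/cp-test.txt
    # Node file back to the host:
    minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt /tmp/cp-test.txt
    # Directly between two nodes:
    minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt multinode-demo-m02:/home/docker/cp-test.txt
    # Spot-check over ssh:
    minikube -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"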

TestMultiNode/serial/StopNode (2.27s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-027136 node stop m03: (1.266188806s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-027136 status: exit status 7 (503.584166ms)

-- stdout --
	multinode-027136
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-027136-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-027136-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-027136 status --alsologtostderr: exit status 7 (501.355653ms)

-- stdout --
	multinode-027136
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-027136-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-027136-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1129 09:00:22.957393  147416 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:00:22.957500  147416 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:00:22.957505  147416 out.go:374] Setting ErrFile to fd 2...
	I1129 09:00:22.957509  147416 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:00:22.957693  147416 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 09:00:22.957858  147416 out.go:368] Setting JSON to false
	I1129 09:00:22.957880  147416 mustload.go:66] Loading cluster: multinode-027136
	I1129 09:00:22.958077  147416 notify.go:221] Checking for updates...
	I1129 09:00:22.958196  147416 config.go:182] Loaded profile config "multinode-027136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:00:22.958212  147416 status.go:174] checking status of multinode-027136 ...
	I1129 09:00:22.959257  147416 cli_runner.go:164] Run: docker container inspect multinode-027136 --format={{.State.Status}}
	I1129 09:00:22.979865  147416 status.go:371] multinode-027136 host status = "Running" (err=<nil>)
	I1129 09:00:22.979893  147416 host.go:66] Checking if "multinode-027136" exists ...
	I1129 09:00:22.980225  147416 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-027136
	I1129 09:00:22.998489  147416 host.go:66] Checking if "multinode-027136" exists ...
	I1129 09:00:22.998746  147416 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:00:22.998789  147416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-027136
	I1129 09:00:23.016860  147416 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32904 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/multinode-027136/id_rsa Username:docker}
	I1129 09:00:23.115200  147416 ssh_runner.go:195] Run: systemctl --version
	I1129 09:00:23.121407  147416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:00:23.133533  147416 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:00:23.187644  147416 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-29 09:00:23.178430003 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:00:23.188193  147416 kubeconfig.go:125] found "multinode-027136" server: "https://192.168.67.2:8443"
	I1129 09:00:23.188223  147416 api_server.go:166] Checking apiserver status ...
	I1129 09:00:23.188255  147416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:00:23.199614  147416 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1268/cgroup
	W1129 09:00:23.207961  147416 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1268/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1129 09:00:23.208024  147416 ssh_runner.go:195] Run: ls
	I1129 09:00:23.211631  147416 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1129 09:00:23.216384  147416 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1129 09:00:23.216410  147416 status.go:463] multinode-027136 apiserver status = Running (err=<nil>)
	I1129 09:00:23.216422  147416 status.go:176] multinode-027136 status: &{Name:multinode-027136 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1129 09:00:23.216445  147416 status.go:174] checking status of multinode-027136-m02 ...
	I1129 09:00:23.216730  147416 cli_runner.go:164] Run: docker container inspect multinode-027136-m02 --format={{.State.Status}}
	I1129 09:00:23.234116  147416 status.go:371] multinode-027136-m02 host status = "Running" (err=<nil>)
	I1129 09:00:23.234139  147416 host.go:66] Checking if "multinode-027136-m02" exists ...
	I1129 09:00:23.234407  147416 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-027136-m02
	I1129 09:00:23.252344  147416 host.go:66] Checking if "multinode-027136-m02" exists ...
	I1129 09:00:23.252616  147416 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:00:23.252674  147416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-027136-m02
	I1129 09:00:23.270699  147416 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/22000-5652/.minikube/machines/multinode-027136-m02/id_rsa Username:docker}
	I1129 09:00:23.369212  147416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:00:23.381008  147416 status.go:176] multinode-027136-m02 status: &{Name:multinode-027136-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1129 09:00:23.381037  147416 status.go:174] checking status of multinode-027136-m03 ...
	I1129 09:00:23.381283  147416 cli_runner.go:164] Run: docker container inspect multinode-027136-m03 --format={{.State.Status}}
	I1129 09:00:23.399251  147416 status.go:371] multinode-027136-m03 host status = "Stopped" (err=<nil>)
	I1129 09:00:23.399270  147416 status.go:384] host is not running, skipping remaining checks
	I1129 09:00:23.399277  147416 status.go:176] multinode-027136-m03 status: &{Name:multinode-027136-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)
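Note the exit code above: with one host stopped, status exits 7 rather than 0, so scripts should branch on the code, not parse the text. A sketch, assuming the multinode-demo profile:

    minikube -p multinode-demo node stop m03
    minikube -p multinode-demo status
    rc=$?
    # In this run, exit code 7 accompanies a stopped host; 0 means every node is up.
    [ "$rc" -ne 0 ] && echo "at least one node is not running (exit $rc)"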

TestMultiNode/serial/StartAfterStop (7.2s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 node start m03 -v=5 --alsologtostderr
E1129 09:00:26.238029    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-027136 node start m03 -v=5 --alsologtostderr: (6.492276344s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.20s)

TestMultiNode/serial/RestartKeepsNodes (78.9s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-027136
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-027136
E1129 09:00:50.595155    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/functional-137675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-027136: (30.117846984s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-027136 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-027136 --wait=true -v=5 --alsologtostderr: (48.658595236s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-027136
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.90s)

TestMultiNode/serial/DeleteNode (5.25s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-027136 node delete m03: (4.647836032s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.25s)
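Deleting a node and re-checking readiness uses the go-template the test logs; a sketch with the quoting adjusted so the template survives the shell:

    minikube -p multinode-demo node delete m03
    # Prints one Ready-status line per remaining node; each should read True.
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'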

TestMultiNode/serial/StopMultiNode (28.5s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-027136 stop: (28.308467966s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-027136 status: exit status 7 (95.775952ms)

-- stdout --
	multinode-027136
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-027136-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-027136 status --alsologtostderr: exit status 7 (94.642802ms)

-- stdout --
	multinode-027136
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-027136-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1129 09:02:23.206891  157246 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:02:23.207014  157246 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:02:23.207026  157246 out.go:374] Setting ErrFile to fd 2...
	I1129 09:02:23.207032  157246 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:02:23.207237  157246 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 09:02:23.207391  157246 out.go:368] Setting JSON to false
	I1129 09:02:23.207415  157246 mustload.go:66] Loading cluster: multinode-027136
	I1129 09:02:23.207479  157246 notify.go:221] Checking for updates...
	I1129 09:02:23.207784  157246 config.go:182] Loaded profile config "multinode-027136": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:02:23.207798  157246 status.go:174] checking status of multinode-027136 ...
	I1129 09:02:23.208211  157246 cli_runner.go:164] Run: docker container inspect multinode-027136 --format={{.State.Status}}
	I1129 09:02:23.226861  157246 status.go:371] multinode-027136 host status = "Stopped" (err=<nil>)
	I1129 09:02:23.226889  157246 status.go:384] host is not running, skipping remaining checks
	I1129 09:02:23.226898  157246 status.go:176] multinode-027136 status: &{Name:multinode-027136 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1129 09:02:23.226942  157246 status.go:174] checking status of multinode-027136-m02 ...
	I1129 09:02:23.227312  157246 cli_runner.go:164] Run: docker container inspect multinode-027136-m02 --format={{.State.Status}}
	I1129 09:02:23.244592  157246 status.go:371] multinode-027136-m02 host status = "Stopped" (err=<nil>)
	I1129 09:02:23.244614  157246 status.go:384] host is not running, skipping remaining checks
	I1129 09:02:23.244620  157246 status.go:176] multinode-027136-m02 status: &{Name:multinode-027136-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.50s)

TestMultiNode/serial/RestartMultiNode (26.85s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-027136 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-027136 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (26.252481721s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-027136 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (26.85s)

TestMultiNode/serial/ValidateNameConflict (22.56s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-027136
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-027136-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-027136-m02 --driver=docker  --container-runtime=crio: exit status 14 (81.728859ms)

-- stdout --
	* [multinode-027136-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22000
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22000-5652/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5652/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-027136-m02' is duplicated with machine name 'multinode-027136-m02' in profile 'multinode-027136'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-027136-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-027136-m03 --driver=docker  --container-runtime=crio: (19.762427644s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-027136
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-027136: exit status 80 (294.048296ms)

-- stdout --
	* Adding node m03 to cluster multinode-027136 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-027136-m03 already exists in multinode-027136-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-027136-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-027136-m03: (2.359492756s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (22.56s)

TestPreload (100.02s)

=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-798642 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-798642 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (46.757823562s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-798642 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-798642 image pull gcr.io/k8s-minikube/busybox: (1.597287424s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-798642
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-798642: (6.076039327s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-798642 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-798642 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (42.960350542s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-798642 image list
helpers_test.go:175: Cleaning up "test-preload-798642" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-798642
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-798642: (2.395915702s)
--- PASS: TestPreload (100.02s)
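The preload round-trip above can be replayed by hand; a sketch with an illustrative profile name, mirroring the logged sequence:

    # Start without the preloaded image tarball, add an image, stop.
    minikube start -p preload-demo --preload=false --driver=docker --container-runtime=crio
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    # Restarting with preload enabled must not clobber images added beforehand.
    minikube start -p preload-demo --preload=true
    minikube -p preload-demo image list    # busybox should still be listed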

TestScheduledStopUnix (94.35s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-632738 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-632738 --memory=3072 --driver=docker  --container-runtime=crio: (18.833999614s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-632738 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1129 09:05:15.786770  173997 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:05:15.786893  173997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:05:15.786903  173997 out.go:374] Setting ErrFile to fd 2...
	I1129 09:05:15.786908  173997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:05:15.787126  173997 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 09:05:15.787440  173997 out.go:368] Setting JSON to false
	I1129 09:05:15.787569  173997 mustload.go:66] Loading cluster: scheduled-stop-632738
	I1129 09:05:15.787935  173997 config.go:182] Loaded profile config "scheduled-stop-632738": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:05:15.788006  173997 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/scheduled-stop-632738/config.json ...
	I1129 09:05:15.788198  173997 mustload.go:66] Loading cluster: scheduled-stop-632738
	I1129 09:05:15.788345  173997 config.go:182] Loaded profile config "scheduled-stop-632738": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-632738 -n scheduled-stop-632738
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-632738 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1129 09:05:16.177365  174145 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:05:16.177470  174145 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:05:16.177482  174145 out.go:374] Setting ErrFile to fd 2...
	I1129 09:05:16.177487  174145 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:05:16.177675  174145 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 09:05:16.177919  174145 out.go:368] Setting JSON to false
	I1129 09:05:16.178099  174145 daemonize_unix.go:73] killing process 174031 as it is an old scheduled stop
	I1129 09:05:16.178195  174145 mustload.go:66] Loading cluster: scheduled-stop-632738
	I1129 09:05:16.178522  174145 config.go:182] Loaded profile config "scheduled-stop-632738": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:05:16.178588  174145 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/scheduled-stop-632738/config.json ...
	I1129 09:05:16.178753  174145 mustload.go:66] Loading cluster: scheduled-stop-632738
	I1129 09:05:16.178859  174145 config.go:182] Loaded profile config "scheduled-stop-632738": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1129 09:05:16.184815    9216 retry.go:31] will retry after 69.726µs: open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/scheduled-stop-632738/pid: no such file or directory
I1129 09:05:16.185988    9216 retry.go:31] will retry after 118.769µs: open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/scheduled-stop-632738/pid: no such file or directory
I1129 09:05:16.187143    9216 retry.go:31] will retry after 214.482µs: open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/scheduled-stop-632738/pid: no such file or directory
I1129 09:05:16.188276    9216 retry.go:31] will retry after 344.794µs: open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/scheduled-stop-632738/pid: no such file or directory
I1129 09:05:16.189404    9216 retry.go:31] will retry after 637.287µs: open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/scheduled-stop-632738/pid: no such file or directory
I1129 09:05:16.190530    9216 retry.go:31] will retry after 754.469µs: open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/scheduled-stop-632738/pid: no such file or directory
I1129 09:05:16.191641    9216 retry.go:31] will retry after 647.839µs: open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/scheduled-stop-632738/pid: no such file or directory
I1129 09:05:16.192783    9216 retry.go:31] will retry after 2.449446ms: open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/scheduled-stop-632738/pid: no such file or directory
I1129 09:05:16.195993    9216 retry.go:31] will retry after 2.044547ms: open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/scheduled-stop-632738/pid: no such file or directory
I1129 09:05:16.198117    9216 retry.go:31] will retry after 5.411513ms: open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/scheduled-stop-632738/pid: no such file or directory
I1129 09:05:16.204330    9216 retry.go:31] will retry after 4.546251ms: open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/scheduled-stop-632738/pid: no such file or directory
I1129 09:05:16.209559    9216 retry.go:31] will retry after 11.968684ms: open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/scheduled-stop-632738/pid: no such file or directory
I1129 09:05:16.221781    9216 retry.go:31] will retry after 8.840344ms: open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/scheduled-stop-632738/pid: no such file or directory
I1129 09:05:16.231006    9216 retry.go:31] will retry after 23.332884ms: open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/scheduled-stop-632738/pid: no such file or directory
I1129 09:05:16.255438    9216 retry.go:31] will retry after 40.969181ms: open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/scheduled-stop-632738/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-632738 --cancel-scheduled
minikube stop output:

-- stdout --
	* All existing scheduled stops cancelled

-- /stdout --
E1129 09:05:26.241962    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-632738 -n scheduled-stop-632738
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-632738
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-632738 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1129 09:05:42.054062  174781 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:05:42.054328  174781 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:05:42.054339  174781 out.go:374] Setting ErrFile to fd 2...
	I1129 09:05:42.054343  174781 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:05:42.054679  174781 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 09:05:42.054973  174781 out.go:368] Setting JSON to false
	I1129 09:05:42.055063  174781 mustload.go:66] Loading cluster: scheduled-stop-632738
	I1129 09:05:42.055411  174781 config.go:182] Loaded profile config "scheduled-stop-632738": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:05:42.055490  174781 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/scheduled-stop-632738/config.json ...
	I1129 09:05:42.055693  174781 mustload.go:66] Loading cluster: scheduled-stop-632738
	I1129 09:05:42.055809  174781 config.go:182] Loaded profile config "scheduled-stop-632738": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

** /stderr **
E1129 09:05:50.594684    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/functional-137675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-632738
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-632738: exit status 7 (81.43252ms)

-- stdout --
	scheduled-stop-632738
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-632738 -n scheduled-stop-632738
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-632738 -n scheduled-stop-632738: exit status 7 (78.043825ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-632738" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-632738
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-632738: (4.027132228s)
--- PASS: TestScheduledStopUnix (94.35s)
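The scheduled-stop lifecycle shown above (arm, replace, cancel) in sketch form; the profile name is illustrative, and the flags come from the logged commands:

    # Arm a stop 5 minutes out; the command returns immediately and leaves a background timer.
    minikube stop -p sched-demo --schedule 5m
    # A pending stop shows up in status:
    minikube status -p sched-demo --format='{{.TimeToStop}}'
    # Re-issuing --schedule replaces the old timer (the log shows the old process being killed);
    # --cancel-scheduled disarms it.
    minikube stop -p sched-demo --cancel-scheduled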

TestInsufficientStorage (12.34s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-419805 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-419805 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (9.837007973s)

-- stdout --
	{"specversion":"1.0","id":"75f4de91-4b44-44e1-bf13-373924c0fe9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-419805] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f9f0038d-a856-4228-8ea6-1f3b207cfcb1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22000"}}
	{"specversion":"1.0","id":"a90613b8-d098-4c9b-bcca-ee14b560d466","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ed861407-f992-4ff0-b212-5d80a0487070","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22000-5652/kubeconfig"}}
	{"specversion":"1.0","id":"f40fa7dc-0659-4a91-bc85-ae50402a5d96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5652/.minikube"}}
	{"specversion":"1.0","id":"500e274a-dfec-437b-960d-f736b6b16595","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"ff7bf724-b535-474e-8e4e-9749d52d7a9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"25d0b1cc-6917-4423-bc82-6bf1c0ffed01","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"83b3c525-c1a1-4055-bec6-87fa9dea8f68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"8cf34b3b-b789-4aa3-a9da-82b700936218","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b1607a61-3456-431e-9df8-e8b2f9de7506","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"5e8f25f9-1109-4f4e-9dde-ec655e25dc33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-419805\" primary control-plane node in \"insufficient-storage-419805\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"00ef66bb-ab1d-40fa-9cc6-f9619a58dda8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763789673-21948 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"586a312b-d7ae-461c-b280-88f8fae4d17a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"9fd14a24-8760-4cf3-97a7-562484d1e92f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
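The RSRC_DOCKER_STORAGE advice above boils down to reclaiming host disk space before retrying; a minimal sketch using only the commands the error message itself suggests (the -a flag additionally removes unused images):

	docker system prune
	docker system prune -a
	minikube ssh -- docker system prune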
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-419805 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-419805 --output=json --layout=cluster: exit status 7 (303.153956ms)

-- stdout --
	{"Name":"insufficient-storage-419805","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-419805","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1129 09:06:41.373426  177303 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-419805" does not appear in /home/jenkins/minikube-integration/22000-5652/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-419805 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-419805 --output=json --layout=cluster: exit status 7 (295.983152ms)

-- stdout --
	{"Name":"insufficient-storage-419805","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-419805","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1129 09:06:41.670764  177419 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-419805" does not appear in /home/jenkins/minikube-integration/22000-5652/kubeconfig
	E1129 09:06:41.680964  177419 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/insufficient-storage-419805/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-419805" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-419805
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-419805: (1.902169646s)
--- PASS: TestInsufficientStorage (12.34s)

TestRunningBinaryUpgrade (296s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.302761911 start -p running-upgrade-246907 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1129 09:08:29.303068    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.302761911 start -p running-upgrade-246907 --memory=3072 --vm-driver=docker  --container-runtime=crio: (24.8618514s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-246907 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-246907 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m28.042958155s)
helpers_test.go:175: Cleaning up "running-upgrade-246907" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-246907
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-246907: (2.523488885s)
--- PASS: TestRunningBinaryUpgrade (296.00s)

TestKubernetesUpgrade (302.5s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-665137 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-665137 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.21680405s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-665137
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-665137: (2.078724889s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-665137 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-665137 status --format={{.Host}}: exit status 7 (85.399115ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-665137 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-665137 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m26.995006709s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-665137 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-665137 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-665137 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (93.12998ms)

-- stdout --
	* [kubernetes-upgrade-665137] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22000
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22000-5652/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5652/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-665137
	    minikube start -p kubernetes-upgrade-665137 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6651372 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-665137 --kubernetes-version=v1.34.1
	    

** /stderr **
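For reference, the downgrade path the suggestion block above describes is delete-and-recreate rather than an in-place change; a minimal sketch built from the same commands, with the driver and runtime flags this run uses added for completeness:

	minikube delete -p kubernetes-upgrade-665137
	minikube start -p kubernetes-upgrade-665137 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio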
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-665137 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-665137 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.603264368s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-665137" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-665137
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-665137: (4.366046034s)
--- PASS: TestKubernetesUpgrade (302.50s)

TestMissingContainerUpgrade (85.64s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.1426968179 start -p missing-upgrade-134661 --memory=3072 --driver=docker  --container-runtime=crio
E1129 09:07:13.653546    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/functional-137675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.1426968179 start -p missing-upgrade-134661 --memory=3072 --driver=docker  --container-runtime=crio: (40.404512698s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-134661
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-134661: (2.03417141s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-134661
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-134661 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-134661 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (38.848973716s)
helpers_test.go:175: Cleaning up "missing-upgrade-134661" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-134661
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-134661: (3.556359984s)
--- PASS: TestMissingContainerUpgrade (85.64s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-170474 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-170474 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (95.249254ms)

-- stdout --
	* [NoKubernetes-170474] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22000
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22000-5652/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5652/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
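As the MK_USAGE error shows, --no-kubernetes and --kubernetes-version are mutually exclusive; a sketch of the failing invocation next to an accepted variant, reusing the exact profile and flags from this run:

	# rejected: pins a Kubernetes version while disabling Kubernetes
	out/minikube-linux-amd64 start -p NoKubernetes-170474 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
	# clear any global version default, then start without the pin
	minikube config unset kubernetes-version
	out/minikube-linux-amd64 start -p NoKubernetes-170474 --no-kubernetes --driver=docker  --container-runtime=crio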
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (39.19s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-170474 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-170474 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.772181233s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-170474 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.19s)

TestNoKubernetes/serial/StartWithStopK8s (23.65s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-170474 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-170474 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (21.150069811s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-170474 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-170474 status -o json: exit status 2 (345.056357ms)

-- stdout --
	{"Name":"NoKubernetes-170474","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-170474
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-170474: (2.154002277s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (23.65s)

TestNoKubernetes/serial/Start (6.4s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-170474 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-170474 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (6.402916579s)
--- PASS: TestNoKubernetes/serial/Start (6.40s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22000-5652/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-170474 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-170474 "sudo systemctl is-active --quiet service kubelet": exit status 1 (314.668488ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

TestNoKubernetes/serial/ProfileList (1.94s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.94s)

TestNoKubernetes/serial/Stop (1.29s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-170474
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-170474: (1.287360649s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

TestNoKubernetes/serial/StartNoArgs (7.54s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-170474 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-170474 --driver=docker  --container-runtime=crio: (7.544244867s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.54s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-170474 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-170474 "sudo systemctl is-active --quiet service kubelet": exit status 1 (319.848974ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

TestNetworkPlugins/group/false (5.15s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-628644 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-628644 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (207.568231ms)

-- stdout --
	* [false-628644] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22000
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22000-5652/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5652/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1129 09:08:14.103148  203410 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:08:14.103430  203410 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:08:14.103443  203410 out.go:374] Setting ErrFile to fd 2...
	I1129 09:08:14.103449  203410 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:08:14.103745  203410 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5652/.minikube/bin
	I1129 09:08:14.104321  203410 out.go:368] Setting JSON to false
	I1129 09:08:14.105609  203410 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3046,"bootTime":1764404248,"procs":289,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 09:08:14.105672  203410 start.go:143] virtualization: kvm guest
	I1129 09:08:14.111339  203410 out.go:179] * [false-628644] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 09:08:14.112939  203410 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:08:14.112927  203410 notify.go:221] Checking for updates...
	I1129 09:08:14.115726  203410 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:08:14.116958  203410 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-5652/kubeconfig
	I1129 09:08:14.118314  203410 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5652/.minikube
	I1129 09:08:14.119766  203410 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 09:08:14.124487  203410 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:08:14.126734  203410 config.go:182] Loaded profile config "cert-expiration-836438": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:08:14.126920  203410 config.go:182] Loaded profile config "cert-options-207443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:08:14.127046  203410 config.go:182] Loaded profile config "force-systemd-env-076374": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:08:14.127176  203410 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:08:14.154300  203410 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1129 09:08:14.154494  203410 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:08:14.226682  203410 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:82 SystemTime:2025-11-29 09:08:14.21288729 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:08:14.226828  203410 docker.go:319] overlay module found
	I1129 09:08:14.228524  203410 out.go:179] * Using the docker driver based on user configuration
	I1129 09:08:14.233006  203410 start.go:309] selected driver: docker
	I1129 09:08:14.233031  203410 start.go:927] validating driver "docker" against <nil>
	I1129 09:08:14.233049  203410 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:08:14.235085  203410 out.go:203] 
	W1129 09:08:14.236622  203410 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1129 09:08:14.237903  203410 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-628644 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-628644

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-628644

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-628644

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-628644

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-628644

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-628644

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-628644

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-628644

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-628644

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-628644

>>> host: /etc/nsswitch.conf:
* Profile "false-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-628644"

>>> host: /etc/hosts:
* Profile "false-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-628644"

>>> host: /etc/resolv.conf:
* Profile "false-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-628644"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-628644

>>> host: crictl pods:
* Profile "false-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-628644"

>>> host: crictl containers:
* Profile "false-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-628644"

>>> k8s: describe netcat deployment:
error: context "false-628644" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-628644" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-628644" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-628644" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-628644" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-628644" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-628644" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-628644" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-628644"

>>> host: ip a s:
* Profile "false-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-628644"

>>> host: ip r s:
* Profile "false-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-628644"

>>> host: iptables-save:
* Profile "false-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-628644"

>>> host: iptables table nat:
* Profile "false-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-628644"

>>> k8s: describe kube-proxy daemon set:
error: context "false-628644" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-628644" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-628644" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-628644"

>>> host: kubelet daemon config:
* Profile "false-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-628644"

>>> k8s: kubelet logs:
* Profile "false-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-628644"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-628644"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-628644"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 29 Nov 2025 09:07:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-836438
contexts:
- context:
    cluster: cert-expiration-836438
    extensions:
    - extension:
        last-update: Sat, 29 Nov 2025 09:07:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-836438
  name: cert-expiration-836438
current-context: ""
kind: Config
users:
- name: cert-expiration-836438
  user:
    client-certificate: /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/cert-expiration-836438/client.crt
    client-key: /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/cert-expiration-836438/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-628644

>>> host: docker daemon status:
* Profile "false-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-628644"

>>> host: docker daemon config:
* Profile "false-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-628644"

>>> host: /etc/docker/daemon.json:
* Profile "false-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-628644"

>>> host: docker system info:
* Profile "false-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-628644"

>>> host: cri-docker daemon status:
* Profile "false-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-628644"

>>> host: cri-docker daemon config:
* Profile "false-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-628644"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-628644"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-628644"

>>> host: cri-dockerd version:
* Profile "false-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-628644"

>>> host: containerd daemon status:
* Profile "false-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-628644"

>>> host: containerd daemon config:
* Profile "false-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-628644"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-628644"

>>> host: /etc/containerd/config.toml:
* Profile "false-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-628644"

>>> host: containerd config dump:
* Profile "false-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-628644"

>>> host: crio daemon status:
* Profile "false-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-628644"

>>> host: crio daemon config:
* Profile "false-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-628644"

>>> host: /etc/crio:
* Profile "false-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-628644"

>>> host: crio config:
* Profile "false-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-628644"

----------------------- debugLogs end: false-628644 [took: 4.752665571s] --------------------------------
helpers_test.go:175: Cleaning up "false-628644" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-628644
--- PASS: TestNetworkPlugins/group/false (5.15s)

TestStoppedBinaryUpgrade/Setup (0.64s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.64s)

TestStoppedBinaryUpgrade/Upgrade (286.46s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.86760566 start -p stopped-upgrade-355524 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.86760566 start -p stopped-upgrade-355524 --memory=3072 --vm-driver=docker  --container-runtime=crio: (22.263960231s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.86760566 -p stopped-upgrade-355524 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.86760566 -p stopped-upgrade-355524 stop: (1.940350904s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-355524 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1129 09:10:26.237759    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/addons-053273/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-355524 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m22.257158025s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (286.46s)

TestPause/serial/Start (41.65s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-295501 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-295501 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (41.652505901s)
--- PASS: TestPause/serial/Start (41.65s)

TestPause/serial/SecondStartNoReconfiguration (6.16s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-295501 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-295501 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.142750776s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.16s)
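
SecondStartNoReconfiguration runs `minikube start` a second time against the already-running pause-295501 profile; since nothing changed, it finishes in about 6s versus roughly 42s for the cold start above, confirming that start is effectively idempotent for a healthy cluster. The pair of commands, trimmed to the flags that matter:

    # cold start: provision the cluster
    minikube start -p pause-295501 --memory=3072 --install-addons=false --wait=all --driver=docker --container-runtime=crio
    # warm start: must detect the running cluster and skip reconfiguration
    minikube start -p pause-295501 --driver=docker --container-runtime=crio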

                                                
                                    
TestNetworkPlugins/group/auto/Start (41.54s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-628644 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-628644 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (41.543477722s)
--- PASS: TestNetworkPlugins/group/auto/Start (41.54s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-628644 "pgrep -a kubelet"
I1129 09:12:32.797974    9216 config.go:182] Loaded profile config "auto-628644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (8.19s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-628644 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-97v5b" [0c8f89c5-89bc-4ebc-9c83-a2f3b3b84510] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-97v5b" [0c8f89c5-89bc-4ebc-9c83-a2f3b3b84510] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.003447184s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.19s)
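
NetCatPod force-replaces a small netcat deployment and waits for its pod (label app=netcat) to turn Ready; the status transitions logged above are the usual Pending-with-unready-containers followed by Running. A hedged equivalent of that wait using kubectl directly (the test itself polls with its own helper):

    kubectl --context auto-628644 replace --force -f testdata/netcat-deployment.yaml
    # block until the netcat pod reports Ready; the test allows up to 15m
    kubectl --context auto-628644 wait --for=condition=Ready pod -l app=netcat --timeout=15m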

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.11s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-628644 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.11s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.09s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-628644 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-628644 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)
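
The DNS, Localhost, and HairPin steps all exec into the same netcat deployment: DNS resolves the kubernetes.default service name, Localhost checks for a listener on 127.0.0.1:8080, and HairPin dials the pod's own service ("netcat") to verify hairpin NAT. In the nc invocations, -z only probes for a listener without sending data, -w 5 caps each connection attempt at five seconds, and -i 5 spaces successive probes. The three probes, verbatim from the log:

    kubectl --context auto-628644 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-628644 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-628644 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"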

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (40.01s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-628644 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-628644 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (40.014046315s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (40.01s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (52.68s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-628644 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-628644 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (52.678103314s)
--- PASS: TestNetworkPlugins/group/calico/Start (52.68s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.11s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-355524
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-355524: (1.11453151s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (54.61s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-628644 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-628644 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (54.608210888s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (54.61s)
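
Across this group, the Start step varies only the CNI selection: no flag for minikube's automatic choice (auto), a built-in plugin name (--cni=kindnet, --cni=calico, --cni=flannel, --cni=bridge), the legacy default bridge via --enable-default-cni=true, or, as here, a path to a user-supplied manifest. A sketch of the custom-manifest form, assuming kube-flannel.yaml is a valid flannel DaemonSet manifest in the current directory:

    # deploy a custom CNI from a local manifest instead of a built-in plugin
    minikube start -p custom-flannel --memory=3072 --driver=docker \
      --container-runtime=crio --cni=kube-flannel.yaml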

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (61.55s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-628644 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-628644 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m1.554443498s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (61.55s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-mrkk8" [af989f8e-8d18-4abf-8675-13950576ff7a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005371979s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
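
Before running traffic probes, ControllerPod confirms the CNI's node agent is healthy by watching a label selector in its namespace: app=kindnet in kube-system here, k8s-app=calico-node for calico, and app=flannel in kube-flannel further below. A hedged kubectl equivalent of that readiness gate:

    # wait for the kindnet DaemonSet pod to become Ready
    kubectl --context kindnet-628644 -n kube-system wait \
      --for=condition=Ready pod -l app=kindnet --timeout=10m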

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.43s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-628644 "pgrep -a kubelet"
I1129 09:13:47.131724    9216 config.go:182] Loaded profile config "kindnet-628644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.43s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (8.26s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-628644 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jzqs9" [01b396c2-14c2-4485-a55e-e50c7402c142] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jzqs9" [01b396c2-14c2-4485-a55e-e50c7402c142] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.003865879s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.11s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-628644 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.11s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-628644 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-628644 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-2lkkn" [0d8b2ad5-28b8-44f1-a0ef-58b460442279] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-2lkkn" [0d8b2ad5-28b8-44f1-a0ef-58b460442279] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004910421s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (51.73s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-628644 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-628644 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (51.734173287s)
--- PASS: TestNetworkPlugins/group/flannel/Start (51.73s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.34s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-628644 "pgrep -a kubelet"
I1129 09:14:18.372464    9216 config.go:182] Loaded profile config "calico-628644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (9.24s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-628644 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-cwsnh" [62794536-a9e2-4389-b7e0-be015fd26866] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-cwsnh" [62794536-a9e2-4389-b7e0-be015fd26866] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003580933s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-628644 "pgrep -a kubelet"
I1129 09:14:27.251384    9216 config.go:182] Loaded profile config "custom-flannel-628644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (8.25s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-628644 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qpwfk" [fd41f539-70df-4d72-8074-05fa7a471592] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-qpwfk" [fd41f539-70df-4d72-8074-05fa7a471592] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.005785873s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.25s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-628644 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-628644 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-628644 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-628644 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-628644 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-628644 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-628644 "pgrep -a kubelet"
I1129 09:14:40.992399    9216 config.go:182] Loaded profile config "enable-default-cni-628644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.27s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-628644 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-frk5b" [c05d41ac-a3ad-454f-8c08-82227afbb096] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-frk5b" [c05d41ac-a3ad-454f-8c08-82227afbb096] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004347043s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.27s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-628644 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-628644 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-628644 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (32.68s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-628644 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-628644 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (32.674945059s)
--- PASS: TestNetworkPlugins/group/bridge/Start (32.68s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (54.61s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-680646 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-680646 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (54.60759162s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (54.61s)
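
The old-k8s-version group pins --kubernetes-version=v1.28.0 while the sibling groups run the default v1.34.1, so the current binary is exercised against the oldest control plane it still supports. A trimmed reproduction (the --kvm-* flags in the logged command appear to be inert under the docker driver and are dropped here):

    # oldest supported Kubernetes under the new binary
    minikube start -p old-k8s-version --memory=3072 --driver=docker \
      --container-runtime=crio --kubernetes-version=v1.28.0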

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-4x76w" [a827c09d-ba5d-49dc-86b1-b721e784746b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003693383s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (54.1s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-897274 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-897274 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (54.098077452s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (54.10s)
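
--preload=false makes minikube skip the preloaded image tarball, so every component image is pulled at start time; compare this 54s cold start with the ~42s preloaded auto start earlier on the same runner. The flag in isolation:

    # pull all images fresh instead of loading the preload tarball
    minikube start -p no-preload --memory=3072 --preload=false \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.34.1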

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-628644 "pgrep -a kubelet"
I1129 09:15:15.475721    9216 config.go:182] Loaded profile config "flannel-628644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.22s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-628644 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-c6kr9" [f51a5ea5-917e-476f-9ba2-803254a407be] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-c6kr9" [f51a5ea5-917e-476f-9ba2-803254a407be] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004574206s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-628644 "pgrep -a kubelet"
I1129 09:15:23.666215    9216 config.go:182] Loaded profile config "bridge-628644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.21s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-628644 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5zlz9" [1f4db8a9-cc6d-44ed-96bc-c46acd99bef5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5zlz9" [1f4db8a9-cc6d-44ed-96bc-c46acd99bef5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004511895s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-628644 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-628644 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-628644 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-628644 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-628644 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-628644 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)
E1129 09:17:43.223333    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/auto-628644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (44.29s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-160987 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1129 09:15:50.594020    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/functional-137675/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-160987 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (44.294507109s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (44.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.36s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-680646 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [448319e8-daf0-4564-b243-93ff2f707e47] Pending
helpers_test.go:352: "busybox" [448319e8-daf0-4564-b243-93ff2f707e47] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [448319e8-daf0-4564-b243-93ff2f707e47] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.004657347s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-680646 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.36s)
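
DeployApp creates a busybox pod from testdata/busybox.yaml, waits up to 8m for it to run, then execs `ulimit -n` to confirm the container inherits a usable open-file limit from the cri-o runtime. A hedged stand-in that skips the test's manifest:

    kubectl --context old-k8s-version-680646 run busybox --image=busybox --restart=Never -- sleep 3600
    kubectl --context old-k8s-version-680646 wait --for=condition=Ready pod/busybox --timeout=8m
    # print the container's file-descriptor limit
    kubectl --context old-k8s-version-680646 exec busybox -- /bin/sh -c "ulimit -n"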

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (37.86s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-632243 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-632243 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (37.857406026s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (37.86s)
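
default-k8s-diff-port moves the API server off minikube's default 8443 with --apiserver-port=8444, catching anything downstream that hard-codes the port. A sketch, plus a check that the kubeconfig picked up the non-default port:

    minikube start -p diff-port --memory=3072 --apiserver-port=8444 \
      --driver=docker --container-runtime=crio
    # the advertised server URL should end in :8444
    kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'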

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (16.08s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-680646 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-680646 --alsologtostderr -v=3: (16.076296829s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.28s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-897274 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3251ffcc-aef4-4718-b927-af59fc9befca] Pending
helpers_test.go:352: "busybox" [3251ffcc-aef4-4718-b927-af59fc9befca] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3251ffcc-aef4-4718-b927-af59fc9befca] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004785456s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-897274 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (16.25s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-897274 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-897274 --alsologtostderr -v=3: (16.253624578s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-680646 -n old-k8s-version-680646
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-680646 -n old-k8s-version-680646: exit status 7 (99.134116ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-680646 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)
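
EnableAddonAfterStop leans on `minikube status` exit codes: with the host stopped, `status --format={{.Host}}` prints Stopped and exits 7, which the test explicitly tolerates ("may be ok") before enabling the dashboard addon against the stopped profile. A sketch of the same gate, assuming exit code 7 continues to indicate a stopped host:

    minikube status --format='{{.Host}}' -p old-k8s-version-680646
    if [ $? -eq 7 ]; then
      # enabling an addon on a stopped profile records it to take effect at the next start
      minikube addons enable dashboard -p old-k8s-version-680646
    fi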

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (51.85s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-680646 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-680646 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (51.287701381s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-680646 -n old-k8s-version-680646
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (51.85s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-897274 -n no-preload-897274
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-897274 -n no-preload-897274: exit status 7 (84.718843ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-897274 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (7.26s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-160987 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f749e7c0-d4f3-41c1-987c-5653a82e08e5] Pending
helpers_test.go:352: "busybox" [f749e7c0-d4f3-41c1-987c-5653a82e08e5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f749e7c0-d4f3-41c1-987c-5653a82e08e5] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.003187745s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-160987 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (47.53s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-897274 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-897274 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (47.036049676s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-897274 -n no-preload-897274
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (47.53s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.27s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-632243 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2d48cacb-d056-407e-9a3b-3c0ac0e7456f] Pending
helpers_test.go:352: "busybox" [2d48cacb-d056-407e-9a3b-3c0ac0e7456f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2d48cacb-d056-407e-9a3b-3c0ac0e7456f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.005704903s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-632243 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (18.89s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-160987 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-160987 --alsologtostderr -v=3: (18.887370234s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (18.89s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (17.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-632243 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-632243 --alsologtostderr -v=3: (17.005749987s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (17.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-160987 -n embed-certs-160987
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-160987 -n embed-certs-160987: exit status 7 (97.130988ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-160987 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (49.63s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-160987 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-160987 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (49.265860299s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-160987 -n embed-certs-160987
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (49.63s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-632243 -n default-k8s-diff-port-632243
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-632243 -n default-k8s-diff-port-632243: exit status 7 (98.04605ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-632243 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (43.81s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-632243 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-632243 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (43.464772482s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-632243 -n default-k8s-diff-port-632243
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (43.81s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-mn66t" [f5d4707e-ce09-4732-98b6-607cdc8bd1ff] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004334237s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-mn66t" [f5d4707e-ce09-4732-98b6-607cdc8bd1ff] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004138078s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-680646 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6fjrq" [66ea79bb-5692-472d-947c-7f67b687560c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003383752s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-680646 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)
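
VerifyKubernetesImages lists the node's images as JSON and reports anything outside the expected Kubernetes set; the kindnet and busybox entries above appear to be leftovers from earlier steps in this run, logged informationally rather than failed. To eyeball the same list (the exact JSON field layout is an assumption; jq is optional):

    # dump the profile's image list as JSON for inspection
    minikube -p old-k8s-version-680646 image list --format=json | jq .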

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6fjrq" [66ea79bb-5692-472d-947c-7f67b687560c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004148195s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-897274 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-897274 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (29.69s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-020433 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1129 09:17:32.969408    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/auto-628644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:17:32.975952    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/auto-628644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:17:32.987383    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/auto-628644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:17:33.009666    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/auto-628644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:17:33.051258    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/auto-628644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:17:33.133261    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/auto-628644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:17:33.294789    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/auto-628644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:17:33.616586    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/auto-628644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-020433 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (29.693500405s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (29.69s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-26vff" [7896abb3-c3b1-4280-9b0c-76b64c1ecdc9] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003351612s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-97f9m" [68407194-1c54-4edc-b4f5-ff6610dabb97] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003765846s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-26vff" [7896abb3-c3b1-4280-9b0c-76b64c1ecdc9] Running
E1129 09:17:53.465396    9216 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/auto-628644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003143495s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-632243 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-632243 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-97f9m" [68407194-1c54-4edc-b4f5-ff6610dabb97] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003926705s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-160987 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-160987 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/newest-cni/serial/Stop (17.97s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-020433 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-020433 --alsologtostderr -v=3: (17.972202678s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (17.97s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-020433 -n newest-cni-020433
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-020433 -n newest-cni-020433: exit status 7 (80.864798ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
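Note on the exit status: minikube status encodes machine state in its exit code, which is why the harness tolerates a non-zero result here right after a stop (stdout reports "Stopped"). A minimal sketch of the same probe, for illustration only (the "|| echo" fallback is not part of the suite):

    # a non-zero exit with "Stopped" on stdout is the expected state after `minikube stop`
    out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-020433 \
      || echo "status exited $? (tolerated while the host is stopped)"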
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-020433 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (10.53s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-020433 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-020433 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (10.192204792s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-020433 -n newest-cni-020433
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.53s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
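This warning is expected: with --network-plugin=cni, minikube leaves pod networking to the user, so workload pods cannot schedule until a CNI is applied. As an illustration only (not something this suite does), one could install a CNI by hand; flannel is an arbitrary example here, its resource names come from the upstream manifest, and its default 10.244.0.0/16 pod network would need editing to match this profile's 10.42.0.0/16 CIDR:

    # hypothetical follow-up step; verify names against the manifest you apply
    kubectl --context newest-cni-020433 apply \
      -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
    kubectl --context newest-cni-020433 -n kube-flannel rollout status ds/kube-flannel-ds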
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-020433 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
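The assertion above scans the JSON image listing for images outside minikube's known set. The same listing can be inspected by hand; a sketch assuming jq is available and that each entry carries a repoTags array (true of current minikube output, but worth verifying against your build):

    out/minikube-linux-amd64 -p newest-cni-020433 image list --format=json \
      | jq -r '.[].repoTags[]' \
      | grep -v -e registry.k8s.io -e gcr.io/k8s-minikube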
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

Test skip (27/328)

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (4.8s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-628644 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-628644

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-628644

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-628644

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-628644

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-628644

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-628644

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-628644

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-628644

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-628644

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-628644

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-628644"

>>> host: /etc/hosts:
* Profile "kubenet-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-628644"

>>> host: /etc/resolv.conf:
* Profile "kubenet-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-628644"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-628644

>>> host: crictl pods:
* Profile "kubenet-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-628644"

>>> host: crictl containers:
* Profile "kubenet-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-628644"

>>> k8s: describe netcat deployment:
error: context "kubenet-628644" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-628644" does not exist

>>> k8s: netcat logs:
error: context "kubenet-628644" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-628644" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-628644" does not exist

>>> k8s: coredns logs:
error: context "kubenet-628644" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-628644" does not exist

>>> k8s: api server logs:
error: context "kubenet-628644" does not exist

>>> host: /etc/cni:
* Profile "kubenet-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-628644"

>>> host: ip a s:
* Profile "kubenet-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-628644"

>>> host: ip r s:
* Profile "kubenet-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-628644"

>>> host: iptables-save:
* Profile "kubenet-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-628644"

>>> host: iptables table nat:
* Profile "kubenet-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-628644"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-628644" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-628644" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-628644" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-628644"

>>> host: kubelet daemon config:
* Profile "kubenet-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-628644"

>>> k8s: kubelet logs:
* Profile "kubenet-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-628644"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-628644"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-628644"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 29 Nov 2025 09:07:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-836438
contexts:
- context:
    cluster: cert-expiration-836438
    extensions:
    - extension:
        last-update: Sat, 29 Nov 2025 09:07:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-836438
  name: cert-expiration-836438
current-context: ""
kind: Config
users:
- name: cert-expiration-836438
  user:
    client-certificate: /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/cert-expiration-836438/client.crt
    client-key: /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/cert-expiration-836438/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-628644

>>> host: docker daemon status:
* Profile "kubenet-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-628644"

>>> host: docker daemon config:
* Profile "kubenet-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-628644"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-628644"

>>> host: docker system info:
* Profile "kubenet-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-628644"

>>> host: cri-docker daemon status:
* Profile "kubenet-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-628644"

>>> host: cri-docker daemon config:
* Profile "kubenet-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-628644"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-628644"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-628644"

>>> host: cri-dockerd version:
* Profile "kubenet-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-628644"

>>> host: containerd daemon status:
* Profile "kubenet-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-628644"

>>> host: containerd daemon config:
* Profile "kubenet-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-628644"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-628644"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-628644"

>>> host: containerd config dump:
* Profile "kubenet-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-628644"

>>> host: crio daemon status:
* Profile "kubenet-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-628644"

>>> host: crio daemon config:
* Profile "kubenet-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-628644"

>>> host: /etc/crio:
* Profile "kubenet-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-628644"

>>> host: crio config:
* Profile "kubenet-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-628644"

----------------------- debugLogs end: kubenet-628644 [took: 4.60006662s] --------------------------------
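The kubectl config dump above explains every "context was not found" probe in this section: the merged kubeconfig holds only a leftover cert-expiration-836438 entry and current-context is empty, so nothing addressed at kubenet-628644 can resolve. For illustration only (the debug script does not do this), selecting the surviving context by hand would look like:

    kubectl config get-contexts                        # lists cert-expiration-836438 only
    kubectl config use-context cert-expiration-836438
    kubectl get nodes                                  # now targets https://192.168.85.2:8443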
helpers_test.go:175: Cleaning up "kubenet-628644" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-628644
--- SKIP: TestNetworkPlugins/group/kubenet (4.80s)

TestNetworkPlugins/group/cilium (4.16s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-628644 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-628644

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-628644

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-628644

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-628644

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-628644

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-628644

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-628644

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-628644

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-628644

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-628644

>>> host: /etc/nsswitch.conf:
* Profile "cilium-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-628644"

>>> host: /etc/hosts:
* Profile "cilium-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-628644"

>>> host: /etc/resolv.conf:
* Profile "cilium-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-628644"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-628644

>>> host: crictl pods:
* Profile "cilium-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-628644"

>>> host: crictl containers:
* Profile "cilium-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-628644"

>>> k8s: describe netcat deployment:
error: context "cilium-628644" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-628644" does not exist

>>> k8s: netcat logs:
error: context "cilium-628644" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-628644" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-628644" does not exist

>>> k8s: coredns logs:
error: context "cilium-628644" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-628644" does not exist

>>> k8s: api server logs:
error: context "cilium-628644" does not exist

>>> host: /etc/cni:
* Profile "cilium-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-628644"

>>> host: ip a s:
* Profile "cilium-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-628644"

>>> host: ip r s:
* Profile "cilium-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-628644"

>>> host: iptables-save:
* Profile "cilium-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-628644"

>>> host: iptables table nat:
* Profile "cilium-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-628644"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-628644

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-628644

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-628644" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-628644" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-628644

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-628644

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-628644" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-628644" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-628644" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-628644" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-628644" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-628644"

>>> host: kubelet daemon config:
* Profile "cilium-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-628644"

>>> k8s: kubelet logs:
* Profile "cilium-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-628644"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-628644"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-628644"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22000-5652/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 29 Nov 2025 09:07:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-836438
contexts:
- context:
    cluster: cert-expiration-836438
    extensions:
    - extension:
        last-update: Sat, 29 Nov 2025 09:07:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-836438
  name: cert-expiration-836438
current-context: ""
kind: Config
users:
- name: cert-expiration-836438
  user:
    client-certificate: /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/cert-expiration-836438/client.crt
    client-key: /home/jenkins/minikube-integration/22000-5652/.minikube/profiles/cert-expiration-836438/client.key

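Note: the kubeconfig above contains only the cert-expiration-836438 entry, and current-context is empty, which matches the "context was not found for specified context: cilium-628644" errors from every kubectl probe in this dump. A minimal sketch of verifying that programmatically, assuming the k8s.io/client-go module is available; the kubeconfig path is a placeholder, not taken from the job:

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path; on the CI worker this would be the Jenkins user's kubeconfig.
	cfg, err := clientcmd.LoadFromFile("/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("current-context: %q\n", cfg.CurrentContext) // "" in the dump above
	if _, ok := cfg.Contexts["cilium-628644"]; !ok {
		// The exact condition behind the kubectl errors above.
		fmt.Println(`context "cilium-628644" does not exist`)
	}
}
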
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-628644

>>> host: docker daemon status:
* Profile "cilium-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-628644"

>>> host: docker daemon config:
* Profile "cilium-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-628644"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-628644"

>>> host: docker system info:
* Profile "cilium-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-628644"

>>> host: cri-docker daemon status:
* Profile "cilium-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-628644"

>>> host: cri-docker daemon config:
* Profile "cilium-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-628644"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-628644"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-628644"

>>> host: cri-dockerd version:
* Profile "cilium-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-628644"

>>> host: containerd daemon status:
* Profile "cilium-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-628644"

>>> host: containerd daemon config:
* Profile "cilium-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-628644"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-628644"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-628644"

>>> host: containerd config dump:
* Profile "cilium-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-628644"

>>> host: crio daemon status:
* Profile "cilium-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-628644"

>>> host: crio daemon config:
* Profile "cilium-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-628644"

>>> host: /etc/crio:
* Profile "cilium-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-628644"

>>> host: crio config:
* Profile "cilium-628644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-628644"

----------------------- debugLogs end: cilium-628644 [took: 3.983310729s] --------------------------------
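
Note: every probe above fails identically because the "cilium-628644" profile was never created (the test is skipped on this runner), yet debugLogs still spent roughly 4s probing it. A minimal sketch of a pre-check the collector could run first; maybeDebugLogs and the binary path are illustrative assumptions, not the suite's actual helper:

package integration

import (
	"os/exec"
	"strings"
	"testing"
)

// maybeDebugLogs is a hypothetical guard: it consults "minikube profile list"
// before running the ">>> ..." probes, so a skipped test with no profile
// yields one log line instead of dozens of identical "Profile not found"
// sections.
func maybeDebugLogs(t *testing.T, profile string) {
	t.Helper()
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list").CombinedOutput()
	if err != nil || !strings.Contains(string(out), profile) {
		t.Logf("profile %q not found; skipping debug log collection", profile)
		return
	}
	// ...collect the host and k8s diagnostics shown above...
}
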
helpers_test.go:175: Cleaning up "cilium-628644" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-628644
--- SKIP: TestNetworkPlugins/group/cilium (4.16s)

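The cleanup above (helpers_test.go:175-178) deletes the never-started profile so later tests get a clean slate. A sketch of that pattern with t.Cleanup; cleanupProfile is an illustrative name, and the suite's real helper differs:

package integration

import (
	"os/exec"
	"testing"
)

// cleanupProfile registers a deferred "minikube delete -p <profile>", the
// pattern visible in the `Cleaning up "cilium-628644" profile` lines above.
func cleanupProfile(t *testing.T, profile string) {
	t.Cleanup(func() {
		t.Logf("Cleaning up %q profile ...", profile)
		out, err := exec.Command("out/minikube-linux-amd64", "delete", "-p", profile).CombinedOutput()
		if err != nil {
			t.Logf("delete -p %s failed: %v\n%s", profile, err, out)
		}
	})
}
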
TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-327778" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-327778
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)
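
The SKIP above comes from the driver gate at start_stop_delete_test.go:101; this job runs the docker driver, so a virtualbox-only test never starts a cluster. A minimal sketch of such a gate, where skipUnlessDriver and its parameters are assumptions for illustration, not the suite's actual code:

package integration

import "testing"

// skipUnlessDriver skips the calling test unless the job's driver matches,
// mirroring the "only runs on virtualbox" skip shown above.
func skipUnlessDriver(t *testing.T, want, current string) {
	t.Helper()
	if current != want {
		t.Skipf("skipping %s - only runs on %s", t.Name(), want)
	}
}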